00:00:00.001 Started by upstream project "autotest-per-patch" build number 132828
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.094 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.095 The recommended git tool is: git
00:00:00.095 using credential 00000000-0000-0000-0000-000000000002
00:00:00.097 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.176 Fetching changes from the remote Git repository
00:00:00.177 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.252 Using shallow fetch with depth 1
00:00:00.252 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.252 > git --version # timeout=10
00:00:00.312 > git --version # 'git version 2.39.2'
00:00:00.312 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.365 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.365 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.422 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.433 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.447 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.447 > git config core.sparsecheckout # timeout=10
00:00:07.459 > git read-tree -mu HEAD # timeout=10
00:00:07.473 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.495 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.495 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.624 [Pipeline] Start of Pipeline
00:00:07.635 [Pipeline] library
00:00:07.636 Loading library shm_lib@master
00:00:07.636 Library shm_lib@master is cached. Copying from home.
00:00:07.651 [Pipeline] node
00:00:07.666 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest_2
00:00:07.667 [Pipeline] {
00:00:07.677 [Pipeline] catchError
00:00:07.678 [Pipeline] {
00:00:07.687 [Pipeline] wrap
00:00:07.694 [Pipeline] {
00:00:07.701 [Pipeline] stage
00:00:07.703 [Pipeline] { (Prologue)
00:00:07.929 [Pipeline] sh
00:00:08.215 + logger -p user.info -t JENKINS-CI
00:00:08.233 [Pipeline] echo
00:00:08.235 Node: WFP8
00:00:08.243 [Pipeline] sh
00:00:08.552 [Pipeline] setCustomBuildProperty
00:00:08.568 [Pipeline] echo
00:00:08.570 Cleanup processes
00:00:08.576 [Pipeline] sh
00:00:08.862 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
00:00:08.862 1352676 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
00:00:08.876 [Pipeline] sh
00:00:09.162 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
00:00:09.162 ++ grep -v 'sudo pgrep'
00:00:09.162 ++ awk '{print $1}'
00:00:09.162 + sudo kill -9
00:00:09.162 + true
00:00:09.177 [Pipeline] cleanWs
00:00:09.186 [WS-CLEANUP] Deleting project workspace...
00:00:09.186 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.192 [WS-CLEANUP] done
00:00:09.197 [Pipeline] setCustomBuildProperty
00:00:09.209 [Pipeline] sh
00:00:09.489 + sudo git config --global --replace-all safe.directory '*'
00:00:09.585 [Pipeline] httpRequest
00:00:10.030 [Pipeline] echo
00:00:10.032 Sorcerer 10.211.164.112 is alive
00:00:10.042 [Pipeline] retry
00:00:10.044 [Pipeline] {
00:00:10.058 [Pipeline] httpRequest
00:00:10.062 HttpMethod: GET
00:00:10.063 URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.063 Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.072 Response Code: HTTP/1.1 200 OK
00:00:10.073 Success: Status code 200 is in the accepted range: 200,404
00:00:10.073 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:19.200 [Pipeline] }
00:00:19.219 [Pipeline] // retry
00:00:19.226 [Pipeline] sh
00:00:19.511 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:19.527 [Pipeline] httpRequest
00:00:19.918 [Pipeline] echo
00:00:19.920 Sorcerer 10.211.164.112 is alive
00:00:19.929 [Pipeline] retry
00:00:19.931 [Pipeline] {
00:00:20.069 [Pipeline] httpRequest
00:00:20.073 HttpMethod: GET
00:00:20.073 URL: http://10.211.164.112/packages/spdk_92d1e663afe5048334744edf8d98e5b9a54a794a.tar.gz
00:00:20.074 Sending request to url: http://10.211.164.112/packages/spdk_92d1e663afe5048334744edf8d98e5b9a54a794a.tar.gz
00:00:20.081 Response Code: HTTP/1.1 200 OK
00:00:20.081 Success: Status code 200 is in the accepted range: 200,404
00:00:20.081 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk_92d1e663afe5048334744edf8d98e5b9a54a794a.tar.gz
00:02:18.293 [Pipeline] }
00:02:18.307 [Pipeline] // retry
00:02:18.314 [Pipeline] sh
00:02:18.600 + tar --no-same-owner -xf spdk_92d1e663afe5048334744edf8d98e5b9a54a794a.tar.gz
00:02:21.149 [Pipeline] sh
00:02:21.434 + git -C spdk log --oneline -n5
00:02:21.434 92d1e663a bdev/nvme: Fix depopulating a namespace twice
00:02:21.434 52a413487 bdev: do not retry nomem I/Os during aborting them
00:02:21.434 d13942918 bdev: simplify bdev_reset_freeze_channel
00:02:21.434 0edc184ec accel/mlx5: Support mkey registration
00:02:21.434 06358c250 bdev/nvme: use poll_group's fd_group to register interrupts
00:02:21.445 [Pipeline] }
00:02:21.459 [Pipeline] // stage
00:02:21.468 [Pipeline] stage
00:02:21.470 [Pipeline] { (Prepare)
00:02:21.486 [Pipeline] writeFile
00:02:21.502 [Pipeline] sh
00:02:21.801 + logger -p user.info -t JENKINS-CI
00:02:21.813 [Pipeline] sh
00:02:22.098 + logger -p user.info -t JENKINS-CI
00:02:22.111 [Pipeline] sh
00:02:22.483 + cat autorun-spdk.conf
00:02:22.483 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:22.483 SPDK_TEST_NVMF=1
00:02:22.483 SPDK_TEST_NVME_CLI=1
00:02:22.483 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:22.483 SPDK_TEST_NVMF_NICS=e810
00:02:22.483 SPDK_TEST_VFIOUSER=1
00:02:22.483 SPDK_RUN_UBSAN=1
00:02:22.483 NET_TYPE=phy
00:02:22.491 RUN_NIGHTLY=0
00:02:22.496 [Pipeline] readFile
00:02:22.519 [Pipeline] withEnv
00:02:22.521 [Pipeline] {
00:02:22.537 [Pipeline] sh
00:02:22.824 + set -ex
00:02:22.824 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf ]]
00:02:22.824 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf
00:02:22.824 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:22.824 ++ SPDK_TEST_NVMF=1
00:02:22.824 ++ SPDK_TEST_NVME_CLI=1
00:02:22.824 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:22.824 ++ SPDK_TEST_NVMF_NICS=e810
00:02:22.824 ++ SPDK_TEST_VFIOUSER=1
00:02:22.824 ++ SPDK_RUN_UBSAN=1
00:02:22.824 ++ NET_TYPE=phy
00:02:22.824 ++ RUN_NIGHTLY=0
00:02:22.824 + case $SPDK_TEST_NVMF_NICS in
00:02:22.824 + DRIVERS=ice
00:02:22.824 + [[ tcp == \r\d\m\a ]]
00:02:22.824 + [[ -n ice ]]
00:02:22.824 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:22.824 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:22.824 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:22.824 rmmod: ERROR: Module irdma is not currently loaded
00:02:22.824 rmmod: ERROR: Module i40iw is not currently loaded
00:02:22.824 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:22.824 + true
00:02:22.824 + for D in $DRIVERS
00:02:22.824 + sudo modprobe ice
00:02:22.824 + exit 0
00:02:22.833 [Pipeline] }
00:02:22.849 [Pipeline] // withEnv
00:02:22.854 [Pipeline] }
00:02:22.869 [Pipeline] // stage
00:02:22.879 [Pipeline] catchError
00:02:22.880 [Pipeline] {
00:02:22.895 [Pipeline] timeout
00:02:22.895 Timeout set to expire in 1 hr 0 min
00:02:22.897 [Pipeline] {
00:02:22.911 [Pipeline] stage
00:02:22.913 [Pipeline] { (Tests)
00:02:22.927 [Pipeline] sh
00:02:23.212 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest_2
00:02:23.212 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2
00:02:23.212 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2
00:02:23.212 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest_2 ]]
00:02:23.212 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
00:02:23.212 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output
00:02:23.212 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk ]]
00:02:23.212 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output ]]
00:02:23.212 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output
00:02:23.212 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output ]]
00:02:23.212 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:23.212 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2
00:02:23.212 + source /etc/os-release
00:02:23.212 ++ NAME='Fedora Linux'
00:02:23.212 ++ VERSION='39 (Cloud Edition)'
00:02:23.212 ++ ID=fedora
00:02:23.212 ++ VERSION_ID=39
00:02:23.212 ++ VERSION_CODENAME=
00:02:23.212 ++ PLATFORM_ID=platform:f39
00:02:23.212 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:23.212 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:23.212 ++ LOGO=fedora-logo-icon
00:02:23.212 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:23.212 ++ HOME_URL=https://fedoraproject.org/
00:02:23.212 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:23.212 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:23.212 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:23.212 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:23.212 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:23.212 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:23.212 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:23.212 ++ SUPPORT_END=2024-11-12
00:02:23.212 ++ VARIANT='Cloud Edition'
00:02:23.212 ++ VARIANT_ID=cloud
00:02:23.212 + uname -a
00:02:23.212 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:23.212 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh status
00:02:25.750 Hugepages
00:02:25.750 node hugesize free / total
00:02:25.750 node0 1048576kB 0 / 0
00:02:25.750 node0 2048kB 0 / 0
00:02:25.750 node1 1048576kB 0 / 0
00:02:25.750 node1 2048kB 0 / 0
00:02:25.750
00:02:25.750 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:25.750 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:25.750 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:25.750 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:02:25.750 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:02:25.750 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:02:25.750 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:02:25.750 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:02:25.750 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:02:25.750 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:02:25.750 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:02:25.750 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:02:25.750 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:02:25.750 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:02:25.750 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:02:25.750 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:02:25.750 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:02:25.750 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:02:25.750 + rm -f /tmp/spdk-ld-path
00:02:25.750 + source autorun-spdk.conf
00:02:25.750 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:25.750 ++ SPDK_TEST_NVMF=1
00:02:25.750 ++ SPDK_TEST_NVME_CLI=1
00:02:25.750 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:25.750 ++ SPDK_TEST_NVMF_NICS=e810
00:02:25.750 ++ SPDK_TEST_VFIOUSER=1
00:02:25.750 ++ SPDK_RUN_UBSAN=1
00:02:25.750 ++ NET_TYPE=phy
00:02:25.750 ++ RUN_NIGHTLY=0
00:02:25.750 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:25.750 + [[ -n '' ]]
00:02:25.750 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
00:02:25.750 + for M in /var/spdk/build-*-manifest.txt
00:02:25.750 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:25.750 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output/
00:02:25.750 + for M in /var/spdk/build-*-manifest.txt
00:02:25.750 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:25.750 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output/
00:02:25.750 + for M in /var/spdk/build-*-manifest.txt
00:02:25.750 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:25.751 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output/
00:02:25.751 ++ uname
00:02:25.751 + [[ Linux == \L\i\n\u\x ]]
00:02:25.751 + sudo dmesg -T
00:02:26.010 + sudo dmesg --clear
00:02:26.010 + dmesg_pid=1354116
00:02:26.010 + [[ Fedora Linux == FreeBSD ]]
00:02:26.010 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:26.010 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:26.010 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:26.010 + [[ -x /usr/src/fio-static/fio ]]
00:02:26.010 + export FIO_BIN=/usr/src/fio-static/fio
00:02:26.010 + FIO_BIN=/usr/src/fio-static/fio
00:02:26.010 + sudo dmesg -Tw
00:02:26.010 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\_\2\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:26.010 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:26.010 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:26.010 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:26.010 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:26.010 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:26.010 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:26.010 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:26.010 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf
00:02:26.010 12:10:48 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:26.010 12:10:48 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf
00:02:26.010 12:10:48 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:26.010 12:10:48 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:02:26.010 12:10:48 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:02:26.010 12:10:48 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:26.010 12:10:48 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:02:26.010 12:10:48 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:02:26.010 12:10:48 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:02:26.010 12:10:48 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:02:26.010 12:10:48 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:02:26.010 12:10:48 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:26.010 12:10:48 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf
00:02:26.010 12:10:48 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:26.010 12:10:48 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh
00:02:26.010 12:10:48 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:26.010 12:10:48 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:26.010 12:10:48 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:26.010 12:10:48 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:26.010 12:10:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:26.010 12:10:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:26.010 12:10:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:26.010 12:10:48 -- paths/export.sh@5 -- $ export PATH
00:02:26.010 12:10:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:26.010 12:10:48 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output
00:02:26.010 12:10:48 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:26.010 12:10:48 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733829048.XXXXXX
00:02:26.010 12:10:48 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733829048.y8bZPF
00:02:26.010 12:10:48 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:26.010 12:10:48 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:26.010 12:10:48 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/'
00:02:26.010 12:10:48 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/xnvme --exclude /tmp'
00:02:26.010 12:10:48 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/xnvme --exclude /tmp --status-bugs'
00:02:26.010 12:10:48 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:26.010 12:10:48 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:26.010 12:10:48 -- common/autotest_common.sh@10 -- $ set +x
00:02:26.010 12:10:48 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:02:26.010 12:10:48 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:26.010 12:10:48 -- pm/common@17 -- $ local monitor
00:02:26.010 12:10:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:26.010 12:10:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:26.010 12:10:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:26.010 12:10:48 -- pm/common@21 -- $ date +%s
00:02:26.011 12:10:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:26.011 12:10:48 -- pm/common@21 -- $ date +%s
00:02:26.011 12:10:48 -- pm/common@25 -- $ sleep 1
00:02:26.011 12:10:48 -- pm/common@21 -- $ date +%s
00:02:26.011 12:10:48 -- pm/common@21 -- $ date +%s
00:02:26.011 12:10:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autobuild.sh.1733829048
00:02:26.011 12:10:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autobuild.sh.1733829048
00:02:26.011 12:10:48 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autobuild.sh.1733829048
00:02:26.011 12:10:48 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autobuild.sh.1733829048
00:02:26.270 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autobuild.sh.1733829048_collect-cpu-load.pm.log
00:02:26.270 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autobuild.sh.1733829048_collect-vmstat.pm.log
00:02:26.270 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autobuild.sh.1733829048_collect-cpu-temp.pm.log
00:02:26.270 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autobuild.sh.1733829048_collect-bmc-pm.bmc.pm.log
00:02:27.208 12:10:49 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:27.208 12:10:49 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:27.208 12:10:49 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:27.208 12:10:49 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
00:02:27.208 12:10:49 -- spdk/autobuild.sh@16 -- $ date -u
00:02:27.208 Tue Dec 10 11:10:49 AM UTC 2024
00:02:27.208 12:10:49 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:27.208 v25.01-pre-325-g92d1e663a
00:02:27.208 12:10:49 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:27.208 12:10:49 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:27.208 12:10:49 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:27.208 12:10:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:27.208 12:10:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:27.208 12:10:49 -- common/autotest_common.sh@10 -- $ set +x
00:02:27.208 ************************************
00:02:27.208 START TEST ubsan
00:02:27.208 ************************************
00:02:27.208 12:10:49 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:27.208 using ubsan
00:02:27.208
00:02:27.208 real 0m0.000s
00:02:27.208 user 0m0.000s
00:02:27.208 sys 0m0.000s
00:02:27.208 12:10:49 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:27.208 12:10:49 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:27.208 ************************************
00:02:27.208 END TEST ubsan
00:02:27.208 ************************************
00:02:27.208 12:10:49 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:27.208 12:10:49 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:27.208 12:10:49 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:27.208 12:10:49 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:27.208 12:10:49 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:27.208 12:10:49 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:27.208 12:10:49 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:27.208 12:10:49 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:27.208 12:10:49 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:02:27.468 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk
00:02:27.468 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build
00:02:27.727 Using 'verbs' RDMA provider
00:02:40.561 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/.spdk-isal.log)...done.
00:02:52.776 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/.spdk-isal-crypto.log)...done.
00:02:52.776 Creating mk/config.mk...done.
00:02:52.776 Creating mk/cc.flags.mk...done.
00:02:52.776 Type 'make' to build.
00:02:52.776 12:11:14 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:02:52.776 12:11:14 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:52.776 12:11:14 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:52.776 12:11:14 -- common/autotest_common.sh@10 -- $ set +x
00:02:52.776 ************************************
00:02:52.776 START TEST make
00:02:52.776 ************************************
00:02:52.776 12:11:14 make -- common/autotest_common.sh@1129 -- $ make -j96
00:02:53.035 make[1]: Nothing to be done for 'all'.
00:02:54.415 The Meson build system
00:02:54.415 Version: 1.5.0
00:02:54.415 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/libvfio-user
00:02:54.415 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/build-debug
00:02:54.415 Build type: native build
00:02:54.415 Project name: libvfio-user
00:02:54.415 Project version: 0.0.1
00:02:54.415 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:54.415 C linker for the host machine: cc ld.bfd 2.40-14
00:02:54.415 Host machine cpu family: x86_64
00:02:54.415 Host machine cpu: x86_64
00:02:54.415 Run-time dependency threads found: YES
00:02:54.415 Library dl found: YES
00:02:54.415 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:54.415 Run-time dependency json-c found: YES 0.17
00:02:54.415 Run-time dependency cmocka found: YES 1.1.7
00:02:54.415 Program pytest-3 found: NO
00:02:54.415 Program flake8 found: NO
00:02:54.415 Program misspell-fixer found: NO
00:02:54.415 Program restructuredtext-lint found: NO
00:02:54.415 Program valgrind found: YES (/usr/bin/valgrind)
00:02:54.415 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:54.415 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:54.415 Compiler for C supports arguments -Wwrite-strings: YES
00:02:54.415 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:54.415 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/libvfio-user/test/test-lspci.sh)
00:02:54.415 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/libvfio-user/test/test-linkage.sh)
00:02:54.415 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:54.415 Build targets in project: 8
00:02:54.415 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:54.415 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:54.415
00:02:54.415 libvfio-user 0.0.1
00:02:54.415
00:02:54.415 User defined options
00:02:54.415 buildtype : debug
00:02:54.415 default_library: shared
00:02:54.415 libdir : /usr/local/lib
00:02:54.415
00:02:54.415 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:54.981 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/build-debug'
00:02:54.981 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:54.981 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:54.981 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:54.981 [4/37] Compiling C object samples/null.p/null.c.o
00:02:54.981 [5/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:54.981 [6/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:54.981 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:54.981 [8/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:54.981 [9/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:54.981 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:54.981 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:54.981 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:54.981 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:54.981 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:54.981 [15/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:54.981 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:54.981 [17/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:54.981 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:54.981 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:54.981 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:55.239 [21/37] Compiling C object samples/client.p/client.c.o
00:02:55.239 [22/37] Compiling C object samples/server.p/server.c.o
00:02:55.239 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:55.239 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:55.239 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:55.239 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:55.239 [27/37] Linking target samples/client
00:02:55.239 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:55.239 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:55.239 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:02:55.239 [31/37] Linking target test/unit_tests
00:02:55.499 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:55.499 [33/37] Linking target samples/server
00:02:55.499 [34/37] Linking target samples/null
00:02:55.499 [35/37] Linking target samples/gpio-pci-idio-16
00:02:55.499 [36/37] Linking target samples/shadow_ioeventfd_server
00:02:55.499 [37/37] Linking target samples/lspci
00:02:55.499 INFO: autodetecting backend as ninja
00:02:55.499 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/build-debug
00:02:55.499 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/build-debug
00:02:55.757 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/build-debug'
00:02:55.757 ninja: no work to do.
00:03:01.031 The Meson build system
00:03:01.031 Version: 1.5.0
00:03:01.031 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk
00:03:01.031 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build-tmp
00:03:01.031 Build type: native build
00:03:01.031 Program cat found: YES (/usr/bin/cat)
00:03:01.031 Project name: DPDK
00:03:01.031 Project version: 24.03.0
00:03:01.031 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:01.031 C linker for the host machine: cc ld.bfd 2.40-14
00:03:01.031 Host machine cpu family: x86_64
00:03:01.031 Host machine cpu: x86_64
00:03:01.031 Message: ## Building in Developer Mode ##
00:03:01.031 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:01.031 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/buildtools/check-symbols.sh)
00:03:01.031 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:01.031 Program python3 found: YES (/usr/bin/python3)
00:03:01.031 Program cat found: YES (/usr/bin/cat)
00:03:01.031 Compiler for C supports arguments -march=native: YES
00:03:01.031 Checking for size of "void *" : 8
00:03:01.031 Checking for size of "void *" : 8 (cached)
00:03:01.031 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:01.031 Library m found: YES
00:03:01.031 Library numa found: YES
00:03:01.031 Has header "numaif.h" : YES
00:03:01.031 Library fdt found: NO
00:03:01.031 Library execinfo found: NO
00:03:01.031 Has header "execinfo.h" : YES
00:03:01.031 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:01.031 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:01.031 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:01.031 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:01.031 Run-time dependency openssl found: YES 3.1.1
00:03:01.031 Run-time dependency libpcap found: YES 1.10.4
00:03:01.031 Has header "pcap.h" with dependency libpcap: YES
00:03:01.031 Compiler for C supports arguments -Wcast-qual: YES
00:03:01.031 Compiler for C supports arguments -Wdeprecated: YES
00:03:01.031 Compiler for C supports arguments -Wformat: YES
00:03:01.031 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:01.031 Compiler for C supports arguments -Wformat-security: NO
00:03:01.031 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:01.031 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:01.031 Compiler for C supports arguments -Wnested-externs: YES
00:03:01.031 Compiler for C supports arguments -Wold-style-definition: YES
00:03:01.031 Compiler for C supports arguments -Wpointer-arith: YES
00:03:01.031 Compiler for C supports arguments -Wsign-compare: YES
00:03:01.031 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:01.031 Compiler for C supports arguments -Wundef: YES
00:03:01.031 Compiler for C supports arguments -Wwrite-strings: YES
00:03:01.031 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:01.031 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:01.031 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:01.031 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:01.031 Program objdump found: YES (/usr/bin/objdump)
00:03:01.031 Compiler for C supports arguments -mavx512f: YES
00:03:01.031 Checking if "AVX512 checking" compiles: YES
00:03:01.031 Fetching value of define "__SSE4_2__" : 1
00:03:01.031 Fetching value of define "__AES__" : 1
00:03:01.031 Fetching value of define "__AVX__" : 1
00:03:01.031 Fetching value of define "__AVX2__" : 1
00:03:01.031 Fetching value of define "__AVX512BW__" : 1
00:03:01.031 Fetching value of define "__AVX512CD__" : 1
00:03:01.031 Fetching value of define "__AVX512DQ__" : 1
00:03:01.031 Fetching value of define "__AVX512F__" : 1
00:03:01.032 Fetching value of define "__AVX512VL__" : 1 00:03:01.032 Fetching value of define "__PCLMUL__" : 1 00:03:01.032 Fetching value of define "__RDRND__" : 1 00:03:01.032 Fetching value of define "__RDSEED__" : 1 00:03:01.032 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:01.032 Fetching value of define "__znver1__" : (undefined) 00:03:01.032 Fetching value of define "__znver2__" : (undefined) 00:03:01.032 Fetching value of define "__znver3__" : (undefined) 00:03:01.032 Fetching value of define "__znver4__" : (undefined) 00:03:01.032 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:01.032 Message: lib/log: Defining dependency "log" 00:03:01.032 Message: lib/kvargs: Defining dependency "kvargs" 00:03:01.032 Message: lib/telemetry: Defining dependency "telemetry" 00:03:01.032 Checking for function "getentropy" : NO 00:03:01.032 Message: lib/eal: Defining dependency "eal" 00:03:01.032 Message: lib/ring: Defining dependency "ring" 00:03:01.032 Message: lib/rcu: Defining dependency "rcu" 00:03:01.032 Message: lib/mempool: Defining dependency "mempool" 00:03:01.032 Message: lib/mbuf: Defining dependency "mbuf" 00:03:01.032 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:01.032 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:01.032 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:01.032 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:01.032 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:01.032 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:01.032 Compiler for C supports arguments -mpclmul: YES 00:03:01.032 Compiler for C supports arguments -maes: YES 00:03:01.032 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:01.032 Compiler for C supports arguments -mavx512bw: YES 00:03:01.032 Compiler for C supports arguments -mavx512dq: YES 00:03:01.032 Compiler for C supports arguments -mavx512vl: YES 00:03:01.032 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:03:01.032 Compiler for C supports arguments -mavx2: YES 00:03:01.032 Compiler for C supports arguments -mavx: YES 00:03:01.032 Message: lib/net: Defining dependency "net" 00:03:01.032 Message: lib/meter: Defining dependency "meter" 00:03:01.032 Message: lib/ethdev: Defining dependency "ethdev" 00:03:01.032 Message: lib/pci: Defining dependency "pci" 00:03:01.032 Message: lib/cmdline: Defining dependency "cmdline" 00:03:01.032 Message: lib/hash: Defining dependency "hash" 00:03:01.032 Message: lib/timer: Defining dependency "timer" 00:03:01.032 Message: lib/compressdev: Defining dependency "compressdev" 00:03:01.032 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:01.032 Message: lib/dmadev: Defining dependency "dmadev" 00:03:01.032 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:01.032 Message: lib/power: Defining dependency "power" 00:03:01.032 Message: lib/reorder: Defining dependency "reorder" 00:03:01.032 Message: lib/security: Defining dependency "security" 00:03:01.032 Has header "linux/userfaultfd.h" : YES 00:03:01.032 Has header "linux/vduse.h" : YES 00:03:01.032 Message: lib/vhost: Defining dependency "vhost" 00:03:01.032 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:01.032 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:01.032 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:01.032 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:01.032 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:01.032 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:01.032 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:01.032 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:01.032 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:01.032 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:03:01.032 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:01.032 Configuring doxy-api-html.conf using configuration 00:03:01.032 Configuring doxy-api-man.conf using configuration 00:03:01.032 Program mandb found: YES (/usr/bin/mandb) 00:03:01.032 Program sphinx-build found: NO 00:03:01.032 Configuring rte_build_config.h using configuration 00:03:01.032 Message: 00:03:01.032 ================= 00:03:01.032 Applications Enabled 00:03:01.032 ================= 00:03:01.032 00:03:01.032 apps: 00:03:01.032 00:03:01.032 00:03:01.032 Message: 00:03:01.032 ================= 00:03:01.032 Libraries Enabled 00:03:01.032 ================= 00:03:01.032 00:03:01.032 libs: 00:03:01.032 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:01.032 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:01.032 cryptodev, dmadev, power, reorder, security, vhost, 00:03:01.032 00:03:01.032 Message: 00:03:01.032 =============== 00:03:01.032 Drivers Enabled 00:03:01.032 =============== 00:03:01.032 00:03:01.032 common: 00:03:01.032 00:03:01.032 bus: 00:03:01.032 pci, vdev, 00:03:01.032 mempool: 00:03:01.032 ring, 00:03:01.032 dma: 00:03:01.032 00:03:01.032 net: 00:03:01.032 00:03:01.032 crypto: 00:03:01.032 00:03:01.032 compress: 00:03:01.032 00:03:01.032 vdpa: 00:03:01.032 00:03:01.032 00:03:01.032 Message: 00:03:01.032 ================= 00:03:01.032 Content Skipped 00:03:01.032 ================= 00:03:01.032 00:03:01.032 apps: 00:03:01.032 dumpcap: explicitly disabled via build config 00:03:01.032 graph: explicitly disabled via build config 00:03:01.032 pdump: explicitly disabled via build config 00:03:01.032 proc-info: explicitly disabled via build config 00:03:01.032 test-acl: explicitly disabled via build config 00:03:01.032 test-bbdev: explicitly disabled via build config 00:03:01.032 test-cmdline: explicitly disabled via build config 00:03:01.032 test-compress-perf: explicitly disabled via build config 00:03:01.032 test-crypto-perf: explicitly disabled 
via build config 00:03:01.032 test-dma-perf: explicitly disabled via build config 00:03:01.032 test-eventdev: explicitly disabled via build config 00:03:01.032 test-fib: explicitly disabled via build config 00:03:01.032 test-flow-perf: explicitly disabled via build config 00:03:01.032 test-gpudev: explicitly disabled via build config 00:03:01.032 test-mldev: explicitly disabled via build config 00:03:01.032 test-pipeline: explicitly disabled via build config 00:03:01.032 test-pmd: explicitly disabled via build config 00:03:01.032 test-regex: explicitly disabled via build config 00:03:01.032 test-sad: explicitly disabled via build config 00:03:01.032 test-security-perf: explicitly disabled via build config 00:03:01.032 00:03:01.032 libs: 00:03:01.032 argparse: explicitly disabled via build config 00:03:01.032 metrics: explicitly disabled via build config 00:03:01.032 acl: explicitly disabled via build config 00:03:01.032 bbdev: explicitly disabled via build config 00:03:01.032 bitratestats: explicitly disabled via build config 00:03:01.032 bpf: explicitly disabled via build config 00:03:01.032 cfgfile: explicitly disabled via build config 00:03:01.032 distributor: explicitly disabled via build config 00:03:01.032 efd: explicitly disabled via build config 00:03:01.032 eventdev: explicitly disabled via build config 00:03:01.032 dispatcher: explicitly disabled via build config 00:03:01.032 gpudev: explicitly disabled via build config 00:03:01.032 gro: explicitly disabled via build config 00:03:01.032 gso: explicitly disabled via build config 00:03:01.032 ip_frag: explicitly disabled via build config 00:03:01.032 jobstats: explicitly disabled via build config 00:03:01.032 latencystats: explicitly disabled via build config 00:03:01.032 lpm: explicitly disabled via build config 00:03:01.032 member: explicitly disabled via build config 00:03:01.032 pcapng: explicitly disabled via build config 00:03:01.032 rawdev: explicitly disabled via build config 00:03:01.032 regexdev: 
explicitly disabled via build config 00:03:01.032 mldev: explicitly disabled via build config 00:03:01.032 rib: explicitly disabled via build config 00:03:01.032 sched: explicitly disabled via build config 00:03:01.032 stack: explicitly disabled via build config 00:03:01.032 ipsec: explicitly disabled via build config 00:03:01.032 pdcp: explicitly disabled via build config 00:03:01.032 fib: explicitly disabled via build config 00:03:01.032 port: explicitly disabled via build config 00:03:01.032 pdump: explicitly disabled via build config 00:03:01.032 table: explicitly disabled via build config 00:03:01.032 pipeline: explicitly disabled via build config 00:03:01.032 graph: explicitly disabled via build config 00:03:01.032 node: explicitly disabled via build config 00:03:01.032 00:03:01.032 drivers: 00:03:01.032 common/cpt: not in enabled drivers build config 00:03:01.032 common/dpaax: not in enabled drivers build config 00:03:01.032 common/iavf: not in enabled drivers build config 00:03:01.032 common/idpf: not in enabled drivers build config 00:03:01.032 common/ionic: not in enabled drivers build config 00:03:01.032 common/mvep: not in enabled drivers build config 00:03:01.032 common/octeontx: not in enabled drivers build config 00:03:01.032 bus/auxiliary: not in enabled drivers build config 00:03:01.032 bus/cdx: not in enabled drivers build config 00:03:01.032 bus/dpaa: not in enabled drivers build config 00:03:01.032 bus/fslmc: not in enabled drivers build config 00:03:01.032 bus/ifpga: not in enabled drivers build config 00:03:01.032 bus/platform: not in enabled drivers build config 00:03:01.032 bus/uacce: not in enabled drivers build config 00:03:01.032 bus/vmbus: not in enabled drivers build config 00:03:01.032 common/cnxk: not in enabled drivers build config 00:03:01.032 common/mlx5: not in enabled drivers build config 00:03:01.032 common/nfp: not in enabled drivers build config 00:03:01.032 common/nitrox: not in enabled drivers build config 00:03:01.032 
common/qat: not in enabled drivers build config 00:03:01.032 common/sfc_efx: not in enabled drivers build config 00:03:01.032 mempool/bucket: not in enabled drivers build config 00:03:01.032 mempool/cnxk: not in enabled drivers build config 00:03:01.032 mempool/dpaa: not in enabled drivers build config 00:03:01.032 mempool/dpaa2: not in enabled drivers build config 00:03:01.032 mempool/octeontx: not in enabled drivers build config 00:03:01.032 mempool/stack: not in enabled drivers build config 00:03:01.033 dma/cnxk: not in enabled drivers build config 00:03:01.033 dma/dpaa: not in enabled drivers build config 00:03:01.033 dma/dpaa2: not in enabled drivers build config 00:03:01.033 dma/hisilicon: not in enabled drivers build config 00:03:01.033 dma/idxd: not in enabled drivers build config 00:03:01.033 dma/ioat: not in enabled drivers build config 00:03:01.033 dma/skeleton: not in enabled drivers build config 00:03:01.033 net/af_packet: not in enabled drivers build config 00:03:01.033 net/af_xdp: not in enabled drivers build config 00:03:01.033 net/ark: not in enabled drivers build config 00:03:01.033 net/atlantic: not in enabled drivers build config 00:03:01.033 net/avp: not in enabled drivers build config 00:03:01.033 net/axgbe: not in enabled drivers build config 00:03:01.033 net/bnx2x: not in enabled drivers build config 00:03:01.033 net/bnxt: not in enabled drivers build config 00:03:01.033 net/bonding: not in enabled drivers build config 00:03:01.033 net/cnxk: not in enabled drivers build config 00:03:01.033 net/cpfl: not in enabled drivers build config 00:03:01.033 net/cxgbe: not in enabled drivers build config 00:03:01.033 net/dpaa: not in enabled drivers build config 00:03:01.033 net/dpaa2: not in enabled drivers build config 00:03:01.033 net/e1000: not in enabled drivers build config 00:03:01.033 net/ena: not in enabled drivers build config 00:03:01.033 net/enetc: not in enabled drivers build config 00:03:01.033 net/enetfec: not in enabled drivers build 
config 00:03:01.033 net/enic: not in enabled drivers build config 00:03:01.033 net/failsafe: not in enabled drivers build config 00:03:01.033 net/fm10k: not in enabled drivers build config 00:03:01.033 net/gve: not in enabled drivers build config 00:03:01.033 net/hinic: not in enabled drivers build config 00:03:01.033 net/hns3: not in enabled drivers build config 00:03:01.033 net/i40e: not in enabled drivers build config 00:03:01.033 net/iavf: not in enabled drivers build config 00:03:01.033 net/ice: not in enabled drivers build config 00:03:01.033 net/idpf: not in enabled drivers build config 00:03:01.033 net/igc: not in enabled drivers build config 00:03:01.033 net/ionic: not in enabled drivers build config 00:03:01.033 net/ipn3ke: not in enabled drivers build config 00:03:01.033 net/ixgbe: not in enabled drivers build config 00:03:01.033 net/mana: not in enabled drivers build config 00:03:01.033 net/memif: not in enabled drivers build config 00:03:01.033 net/mlx4: not in enabled drivers build config 00:03:01.033 net/mlx5: not in enabled drivers build config 00:03:01.033 net/mvneta: not in enabled drivers build config 00:03:01.033 net/mvpp2: not in enabled drivers build config 00:03:01.033 net/netvsc: not in enabled drivers build config 00:03:01.033 net/nfb: not in enabled drivers build config 00:03:01.033 net/nfp: not in enabled drivers build config 00:03:01.033 net/ngbe: not in enabled drivers build config 00:03:01.033 net/null: not in enabled drivers build config 00:03:01.033 net/octeontx: not in enabled drivers build config 00:03:01.033 net/octeon_ep: not in enabled drivers build config 00:03:01.033 net/pcap: not in enabled drivers build config 00:03:01.033 net/pfe: not in enabled drivers build config 00:03:01.033 net/qede: not in enabled drivers build config 00:03:01.033 net/ring: not in enabled drivers build config 00:03:01.033 net/sfc: not in enabled drivers build config 00:03:01.033 net/softnic: not in enabled drivers build config 00:03:01.033 net/tap: 
not in enabled drivers build config 00:03:01.033 net/thunderx: not in enabled drivers build config 00:03:01.033 net/txgbe: not in enabled drivers build config 00:03:01.033 net/vdev_netvsc: not in enabled drivers build config 00:03:01.033 net/vhost: not in enabled drivers build config 00:03:01.033 net/virtio: not in enabled drivers build config 00:03:01.033 net/vmxnet3: not in enabled drivers build config 00:03:01.033 raw/*: missing internal dependency, "rawdev" 00:03:01.033 crypto/armv8: not in enabled drivers build config 00:03:01.033 crypto/bcmfs: not in enabled drivers build config 00:03:01.033 crypto/caam_jr: not in enabled drivers build config 00:03:01.033 crypto/ccp: not in enabled drivers build config 00:03:01.033 crypto/cnxk: not in enabled drivers build config 00:03:01.033 crypto/dpaa_sec: not in enabled drivers build config 00:03:01.033 crypto/dpaa2_sec: not in enabled drivers build config 00:03:01.033 crypto/ipsec_mb: not in enabled drivers build config 00:03:01.033 crypto/mlx5: not in enabled drivers build config 00:03:01.033 crypto/mvsam: not in enabled drivers build config 00:03:01.033 crypto/nitrox: not in enabled drivers build config 00:03:01.033 crypto/null: not in enabled drivers build config 00:03:01.033 crypto/octeontx: not in enabled drivers build config 00:03:01.033 crypto/openssl: not in enabled drivers build config 00:03:01.033 crypto/scheduler: not in enabled drivers build config 00:03:01.033 crypto/uadk: not in enabled drivers build config 00:03:01.033 crypto/virtio: not in enabled drivers build config 00:03:01.033 compress/isal: not in enabled drivers build config 00:03:01.033 compress/mlx5: not in enabled drivers build config 00:03:01.033 compress/nitrox: not in enabled drivers build config 00:03:01.033 compress/octeontx: not in enabled drivers build config 00:03:01.033 compress/zlib: not in enabled drivers build config 00:03:01.033 regex/*: missing internal dependency, "regexdev" 00:03:01.033 ml/*: missing internal dependency, "mldev" 
00:03:01.033 vdpa/ifc: not in enabled drivers build config 00:03:01.033 vdpa/mlx5: not in enabled drivers build config 00:03:01.033 vdpa/nfp: not in enabled drivers build config 00:03:01.033 vdpa/sfc: not in enabled drivers build config 00:03:01.033 event/*: missing internal dependency, "eventdev" 00:03:01.033 baseband/*: missing internal dependency, "bbdev" 00:03:01.033 gpu/*: missing internal dependency, "gpudev" 00:03:01.033 00:03:01.033 00:03:01.292 Build targets in project: 85 00:03:01.292 00:03:01.292 DPDK 24.03.0 00:03:01.292 00:03:01.292 User defined options 00:03:01.292 buildtype : debug 00:03:01.292 default_library : shared 00:03:01.292 libdir : lib 00:03:01.292 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build 00:03:01.292 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:01.292 c_link_args : 00:03:01.292 cpu_instruction_set: native 00:03:01.292 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:03:01.292 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:03:01.292 enable_docs : false 00:03:01.292 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:01.292 enable_kmods : false 00:03:01.292 max_lcores : 128 00:03:01.292 tests : false 00:03:01.292 00:03:01.292 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:01.551 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build-tmp' 00:03:01.820 [1/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:01.820 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:01.820 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:01.820 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:01.820 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:01.820 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:01.820 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:01.820 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:01.820 [9/268] Linking static target lib/librte_kvargs.a 00:03:01.820 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:01.820 [11/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:01.820 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:01.820 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:01.820 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:01.820 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:01.820 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:01.820 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:01.820 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:02.079 [19/268] Linking static target lib/librte_log.a 00:03:02.079 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:02.079 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:02.079 [22/268] Linking static target lib/librte_pci.a 00:03:02.079 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:02.079 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:02.079 [25/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:02.079 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:02.340 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:02.340 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:02.340 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:02.340 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:02.340 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:02.340 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:02.340 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:02.340 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:02.340 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:02.340 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:02.340 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:02.340 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:02.340 [39/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:02.340 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:02.340 [41/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:02.340 [42/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:02.340 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:02.340 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:02.340 [45/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:02.340 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:02.340 [47/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:02.340 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:02.340 [49/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:02.340 [50/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:02.340 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:02.340 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:02.340 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:02.340 [54/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:02.340 [55/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:02.340 [56/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:02.340 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:02.340 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:02.340 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:02.340 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:02.340 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:02.340 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:02.340 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:02.340 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:02.340 [65/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:02.340 [66/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:02.340 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:02.340 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:02.340 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:02.340 [70/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:02.340 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:02.340 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:02.340 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:02.340 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:02.340 [75/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:02.340 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:02.340 [77/268] Linking static target lib/librte_ring.a 00:03:02.340 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:02.340 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:02.340 [80/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:02.340 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:02.340 [82/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:02.340 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:02.340 [84/268] Linking static target lib/librte_meter.a 00:03:02.340 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:02.340 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:02.340 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:02.340 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:02.340 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:02.340 [90/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:02.340 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:02.340 [92/268] Compiling C object 
lib/librte_net.a.p/net_rte_net.c.o 00:03:02.340 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:02.340 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:02.340 [95/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:02.340 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:02.340 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:02.341 [98/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:02.599 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:02.599 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:02.599 [101/268] Linking static target lib/librte_telemetry.a 00:03:02.599 [102/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:02.599 [103/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:02.599 [104/268] Linking static target lib/librte_mempool.a 00:03:02.599 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:02.599 [106/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:02.599 [107/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.599 [108/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:02.599 [109/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.599 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:02.599 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:02.599 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:02.599 [113/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:02.599 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 
00:03:02.599 [115/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:02.599 [116/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:02.599 [117/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:02.599 [118/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:02.599 [119/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:02.599 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:02.599 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:02.599 [122/268] Linking static target lib/librte_net.a 00:03:02.599 [123/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:02.599 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:02.599 [125/268] Linking static target lib/librte_eal.a 00:03:02.599 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:02.599 [127/268] Linking static target lib/librte_rcu.a 00:03:02.599 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:02.599 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:02.599 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:02.599 [131/268] Linking static target lib/librte_cmdline.a 00:03:02.599 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:02.599 [133/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.599 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:02.599 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:02.599 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:02.599 [137/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.599 
[138/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.599 [139/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:02.858 [140/268] Linking target lib/librte_log.so.24.1 00:03:02.858 [141/268] Linking static target lib/librte_mbuf.a 00:03:02.858 [142/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:02.858 [143/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:02.858 [144/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:02.858 [145/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:02.858 [146/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:02.859 [147/268] Linking static target lib/librte_timer.a 00:03:02.859 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:02.859 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:02.859 [150/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:02.859 [151/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:02.859 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:02.859 [153/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:02.859 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:02.859 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:02.859 [156/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.859 [157/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:02.859 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:02.859 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:02.859 [160/268] Generating lib/rcu.sym_chk with a custom 
command (wrapped by meson to capture output) 00:03:02.859 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:02.859 [162/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:02.859 [163/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:02.859 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:02.859 [165/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:02.859 [166/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.859 [167/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:02.859 [168/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:02.859 [169/268] Linking static target lib/librte_reorder.a 00:03:02.859 [170/268] Linking target lib/librte_kvargs.so.24.1 00:03:02.859 [171/268] Linking target lib/librte_telemetry.so.24.1 00:03:02.859 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:02.859 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:02.859 [174/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:02.859 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:02.859 [176/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:02.859 [177/268] Linking static target lib/librte_power.a 00:03:02.859 [178/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:02.859 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:02.859 [180/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:02.859 [181/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:02.859 [182/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 
00:03:02.859 [183/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:02.859 [184/268] Linking static target lib/librte_dmadev.a 00:03:02.859 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:02.859 [186/268] Linking static target lib/librte_security.a 00:03:02.859 [187/268] Linking static target lib/librte_compressdev.a 00:03:02.859 [188/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:03.118 [189/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:03.118 [190/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:03.118 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:03.118 [192/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:03.118 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:03.118 [194/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:03.118 [195/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:03.118 [196/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:03.118 [197/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.118 [198/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:03.118 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:03.118 [200/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:03.118 [201/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:03.118 [202/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:03.118 [203/268] Linking static target drivers/librte_mempool_ring.a 00:03:03.118 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:03.118 [205/268] Compiling C object 
drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:03.118 [206/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:03.118 [207/268] Linking static target lib/librte_hash.a 00:03:03.118 [208/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.118 [209/268] Linking static target drivers/librte_bus_vdev.a 00:03:03.118 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:03.118 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:03.118 [212/268] Linking static target drivers/librte_bus_pci.a 00:03:03.377 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.377 [214/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:03.377 [215/268] Linking static target lib/librte_cryptodev.a 00:03:03.377 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.635 [217/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.635 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.635 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:03.635 [220/268] Linking static target lib/librte_ethdev.a 00:03:03.635 [221/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.635 [222/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.635 [223/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.894 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:03.894 [225/268] Generating lib/power.sym_chk with a custom command 
(wrapped by meson to capture output) 00:03:03.894 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.152 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.719 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:04.719 [229/268] Linking static target lib/librte_vhost.a 00:03:05.286 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.659 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.927 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.493 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.493 [234/268] Linking target lib/librte_eal.so.24.1 00:03:12.752 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:12.752 [236/268] Linking target lib/librte_ring.so.24.1 00:03:12.752 [237/268] Linking target lib/librte_meter.so.24.1 00:03:12.752 [238/268] Linking target lib/librte_timer.so.24.1 00:03:12.752 [239/268] Linking target lib/librte_pci.so.24.1 00:03:12.752 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:12.752 [241/268] Linking target lib/librte_dmadev.so.24.1 00:03:12.752 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:12.752 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:12.752 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:12.752 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:12.752 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:13.010 [247/268] Linking target lib/librte_rcu.so.24.1 00:03:13.010 [248/268] 
Linking target drivers/librte_bus_pci.so.24.1 00:03:13.010 [249/268] Linking target lib/librte_mempool.so.24.1 00:03:13.010 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:13.010 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:13.010 [252/268] Linking target lib/librte_mbuf.so.24.1 00:03:13.010 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:13.268 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:13.268 [255/268] Linking target lib/librte_reorder.so.24.1 00:03:13.268 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:03:13.268 [257/268] Linking target lib/librte_net.so.24.1 00:03:13.268 [258/268] Linking target lib/librte_compressdev.so.24.1 00:03:13.526 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:13.526 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:13.526 [261/268] Linking target lib/librte_cmdline.so.24.1 00:03:13.526 [262/268] Linking target lib/librte_security.so.24.1 00:03:13.526 [263/268] Linking target lib/librte_hash.so.24.1 00:03:13.526 [264/268] Linking target lib/librte_ethdev.so.24.1 00:03:13.526 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:13.526 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:13.526 [267/268] Linking target lib/librte_power.so.24.1 00:03:13.785 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:13.785 INFO: autodetecting backend as ninja 00:03:13.785 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build-tmp -j 96 00:03:25.994 CC lib/ut/ut.o 00:03:25.994 CC lib/log/log.o 00:03:25.994 CC lib/log/log_flags.o 00:03:25.994 CC lib/log/log_deprecated.o 00:03:25.994 CC lib/ut_mock/mock.o 00:03:25.994 LIB 
libspdk_ut.a 00:03:25.994 LIB libspdk_ut_mock.a 00:03:25.994 LIB libspdk_log.a 00:03:25.994 SO libspdk_ut_mock.so.6.0 00:03:25.994 SO libspdk_ut.so.2.0 00:03:25.994 SO libspdk_log.so.7.1 00:03:25.994 SYMLINK libspdk_ut_mock.so 00:03:25.994 SYMLINK libspdk_ut.so 00:03:25.994 SYMLINK libspdk_log.so 00:03:25.994 CC lib/dma/dma.o 00:03:25.994 CC lib/ioat/ioat.o 00:03:25.994 CXX lib/trace_parser/trace.o 00:03:25.994 CC lib/util/base64.o 00:03:25.994 CC lib/util/bit_array.o 00:03:25.994 CC lib/util/cpuset.o 00:03:25.994 CC lib/util/crc16.o 00:03:25.994 CC lib/util/crc32.o 00:03:25.994 CC lib/util/crc32c.o 00:03:25.994 CC lib/util/crc32_ieee.o 00:03:25.994 CC lib/util/crc64.o 00:03:25.994 CC lib/util/dif.o 00:03:25.994 CC lib/util/fd.o 00:03:25.994 CC lib/util/fd_group.o 00:03:25.994 CC lib/util/file.o 00:03:25.994 CC lib/util/hexlify.o 00:03:25.994 CC lib/util/iov.o 00:03:25.994 CC lib/util/math.o 00:03:25.994 CC lib/util/net.o 00:03:25.994 CC lib/util/pipe.o 00:03:25.994 CC lib/util/strerror_tls.o 00:03:25.995 CC lib/util/string.o 00:03:25.995 CC lib/util/uuid.o 00:03:25.995 CC lib/util/xor.o 00:03:25.995 CC lib/util/zipf.o 00:03:25.995 CC lib/util/md5.o 00:03:25.995 CC lib/vfio_user/host/vfio_user_pci.o 00:03:25.995 CC lib/vfio_user/host/vfio_user.o 00:03:25.995 LIB libspdk_dma.a 00:03:25.995 SO libspdk_dma.so.5.0 00:03:25.995 LIB libspdk_ioat.a 00:03:25.995 SYMLINK libspdk_dma.so 00:03:25.995 SO libspdk_ioat.so.7.0 00:03:25.995 SYMLINK libspdk_ioat.so 00:03:25.995 LIB libspdk_vfio_user.a 00:03:25.995 SO libspdk_vfio_user.so.5.0 00:03:25.995 LIB libspdk_util.a 00:03:25.995 SYMLINK libspdk_vfio_user.so 00:03:25.995 SO libspdk_util.so.10.1 00:03:25.995 SYMLINK libspdk_util.so 00:03:25.995 LIB libspdk_trace_parser.a 00:03:25.995 SO libspdk_trace_parser.so.6.0 00:03:25.995 SYMLINK libspdk_trace_parser.so 00:03:25.995 CC lib/conf/conf.o 00:03:25.995 CC lib/json/json_parse.o 00:03:25.995 CC lib/json/json_util.o 00:03:25.995 CC lib/json/json_write.o 00:03:25.995 CC 
lib/vmd/vmd.o 00:03:25.995 CC lib/idxd/idxd.o 00:03:25.995 CC lib/vmd/led.o 00:03:25.995 CC lib/idxd/idxd_user.o 00:03:25.995 CC lib/idxd/idxd_kernel.o 00:03:25.995 CC lib/rdma_utils/rdma_utils.o 00:03:25.995 CC lib/env_dpdk/env.o 00:03:25.995 CC lib/env_dpdk/memory.o 00:03:25.995 CC lib/env_dpdk/pci.o 00:03:25.995 CC lib/env_dpdk/init.o 00:03:25.995 CC lib/env_dpdk/threads.o 00:03:25.995 CC lib/env_dpdk/pci_ioat.o 00:03:25.995 CC lib/env_dpdk/pci_virtio.o 00:03:25.995 CC lib/env_dpdk/pci_vmd.o 00:03:25.995 CC lib/env_dpdk/pci_idxd.o 00:03:25.995 CC lib/env_dpdk/pci_event.o 00:03:25.995 CC lib/env_dpdk/sigbus_handler.o 00:03:25.995 CC lib/env_dpdk/pci_dpdk.o 00:03:25.995 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:25.995 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:25.995 LIB libspdk_conf.a 00:03:25.995 SO libspdk_conf.so.6.0 00:03:25.995 LIB libspdk_json.a 00:03:25.995 LIB libspdk_rdma_utils.a 00:03:25.995 SO libspdk_rdma_utils.so.1.0 00:03:25.995 SO libspdk_json.so.6.0 00:03:25.995 SYMLINK libspdk_conf.so 00:03:25.995 SYMLINK libspdk_rdma_utils.so 00:03:25.995 SYMLINK libspdk_json.so 00:03:25.995 LIB libspdk_idxd.a 00:03:25.995 LIB libspdk_vmd.a 00:03:25.995 SO libspdk_vmd.so.6.0 00:03:25.995 SO libspdk_idxd.so.12.1 00:03:26.252 SYMLINK libspdk_vmd.so 00:03:26.252 SYMLINK libspdk_idxd.so 00:03:26.252 CC lib/jsonrpc/jsonrpc_server.o 00:03:26.252 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:26.252 CC lib/jsonrpc/jsonrpc_client.o 00:03:26.252 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:26.252 CC lib/rdma_provider/common.o 00:03:26.252 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:26.510 LIB libspdk_rdma_provider.a 00:03:26.510 LIB libspdk_jsonrpc.a 00:03:26.510 SO libspdk_rdma_provider.so.7.0 00:03:26.510 SO libspdk_jsonrpc.so.6.0 00:03:26.510 SYMLINK libspdk_rdma_provider.so 00:03:26.510 SYMLINK libspdk_jsonrpc.so 00:03:26.510 LIB libspdk_env_dpdk.a 00:03:26.769 SO libspdk_env_dpdk.so.15.1 00:03:26.769 SYMLINK libspdk_env_dpdk.so 00:03:26.769 CC lib/rpc/rpc.o 00:03:27.028 LIB 
libspdk_rpc.a 00:03:27.028 SO libspdk_rpc.so.6.0 00:03:27.028 SYMLINK libspdk_rpc.so 00:03:27.596 CC lib/notify/notify.o 00:03:27.596 CC lib/notify/notify_rpc.o 00:03:27.596 CC lib/trace/trace.o 00:03:27.596 CC lib/trace/trace_flags.o 00:03:27.596 CC lib/trace/trace_rpc.o 00:03:27.596 CC lib/keyring/keyring.o 00:03:27.596 CC lib/keyring/keyring_rpc.o 00:03:27.596 LIB libspdk_notify.a 00:03:27.596 SO libspdk_notify.so.6.0 00:03:27.596 LIB libspdk_keyring.a 00:03:27.596 LIB libspdk_trace.a 00:03:27.596 SYMLINK libspdk_notify.so 00:03:27.596 SO libspdk_keyring.so.2.0 00:03:27.596 SO libspdk_trace.so.11.0 00:03:27.855 SYMLINK libspdk_keyring.so 00:03:27.855 SYMLINK libspdk_trace.so 00:03:28.113 CC lib/thread/thread.o 00:03:28.113 CC lib/thread/iobuf.o 00:03:28.113 CC lib/sock/sock.o 00:03:28.113 CC lib/sock/sock_rpc.o 00:03:28.372 LIB libspdk_sock.a 00:03:28.372 SO libspdk_sock.so.10.0 00:03:28.372 SYMLINK libspdk_sock.so 00:03:28.938 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:28.938 CC lib/nvme/nvme_ctrlr.o 00:03:28.938 CC lib/nvme/nvme_fabric.o 00:03:28.938 CC lib/nvme/nvme_ns_cmd.o 00:03:28.939 CC lib/nvme/nvme_ns.o 00:03:28.939 CC lib/nvme/nvme_pcie_common.o 00:03:28.939 CC lib/nvme/nvme_pcie.o 00:03:28.939 CC lib/nvme/nvme_qpair.o 00:03:28.939 CC lib/nvme/nvme.o 00:03:28.939 CC lib/nvme/nvme_quirks.o 00:03:28.939 CC lib/nvme/nvme_transport.o 00:03:28.939 CC lib/nvme/nvme_discovery.o 00:03:28.939 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:28.939 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:28.939 CC lib/nvme/nvme_tcp.o 00:03:28.939 CC lib/nvme/nvme_opal.o 00:03:28.939 CC lib/nvme/nvme_io_msg.o 00:03:28.939 CC lib/nvme/nvme_poll_group.o 00:03:28.939 CC lib/nvme/nvme_zns.o 00:03:28.939 CC lib/nvme/nvme_stubs.o 00:03:28.939 CC lib/nvme/nvme_auth.o 00:03:28.939 CC lib/nvme/nvme_cuse.o 00:03:28.939 CC lib/nvme/nvme_vfio_user.o 00:03:28.939 CC lib/nvme/nvme_rdma.o 00:03:29.197 LIB libspdk_thread.a 00:03:29.197 SO libspdk_thread.so.11.0 00:03:29.197 SYMLINK libspdk_thread.so 00:03:29.455 
CC lib/fsdev/fsdev.o 00:03:29.455 CC lib/virtio/virtio.o 00:03:29.455 CC lib/fsdev/fsdev_io.o 00:03:29.455 CC lib/fsdev/fsdev_rpc.o 00:03:29.455 CC lib/virtio/virtio_vhost_user.o 00:03:29.455 CC lib/accel/accel.o 00:03:29.455 CC lib/accel/accel_sw.o 00:03:29.455 CC lib/accel/accel_rpc.o 00:03:29.455 CC lib/virtio/virtio_vfio_user.o 00:03:29.455 CC lib/virtio/virtio_pci.o 00:03:29.455 CC lib/blob/blobstore.o 00:03:29.455 CC lib/blob/request.o 00:03:29.455 CC lib/blob/blob_bs_dev.o 00:03:29.455 CC lib/blob/zeroes.o 00:03:29.455 CC lib/init/json_config.o 00:03:29.455 CC lib/init/subsystem.o 00:03:29.455 CC lib/init/subsystem_rpc.o 00:03:29.455 CC lib/init/rpc.o 00:03:29.455 CC lib/vfu_tgt/tgt_endpoint.o 00:03:29.455 CC lib/vfu_tgt/tgt_rpc.o 00:03:29.714 LIB libspdk_init.a 00:03:29.714 SO libspdk_init.so.6.0 00:03:29.973 LIB libspdk_virtio.a 00:03:29.973 LIB libspdk_vfu_tgt.a 00:03:29.973 SYMLINK libspdk_init.so 00:03:29.973 SO libspdk_vfu_tgt.so.3.0 00:03:29.973 SO libspdk_virtio.so.7.0 00:03:29.973 SYMLINK libspdk_vfu_tgt.so 00:03:29.973 SYMLINK libspdk_virtio.so 00:03:29.973 LIB libspdk_fsdev.a 00:03:30.231 SO libspdk_fsdev.so.2.0 00:03:30.231 CC lib/event/app.o 00:03:30.231 CC lib/event/reactor.o 00:03:30.231 CC lib/event/log_rpc.o 00:03:30.231 CC lib/event/app_rpc.o 00:03:30.231 CC lib/event/scheduler_static.o 00:03:30.231 SYMLINK libspdk_fsdev.so 00:03:30.490 LIB libspdk_accel.a 00:03:30.490 SO libspdk_accel.so.16.0 00:03:30.490 LIB libspdk_nvme.a 00:03:30.490 SYMLINK libspdk_accel.so 00:03:30.490 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:30.490 LIB libspdk_event.a 00:03:30.490 SO libspdk_nvme.so.15.0 00:03:30.490 SO libspdk_event.so.14.0 00:03:30.749 SYMLINK libspdk_event.so 00:03:30.749 SYMLINK libspdk_nvme.so 00:03:30.749 CC lib/bdev/bdev.o 00:03:30.749 CC lib/bdev/bdev_rpc.o 00:03:30.749 CC lib/bdev/bdev_zone.o 00:03:30.749 CC lib/bdev/part.o 00:03:30.749 CC lib/bdev/scsi_nvme.o 00:03:31.008 LIB libspdk_fuse_dispatcher.a 00:03:31.008 SO 
libspdk_fuse_dispatcher.so.1.0 00:03:31.008 SYMLINK libspdk_fuse_dispatcher.so 00:03:31.944 LIB libspdk_blob.a 00:03:31.944 SO libspdk_blob.so.12.0 00:03:31.944 SYMLINK libspdk_blob.so 00:03:32.203 CC lib/lvol/lvol.o 00:03:32.203 CC lib/blobfs/blobfs.o 00:03:32.203 CC lib/blobfs/tree.o 00:03:32.771 LIB libspdk_bdev.a 00:03:32.771 LIB libspdk_blobfs.a 00:03:32.771 SO libspdk_bdev.so.17.0 00:03:32.771 SO libspdk_blobfs.so.11.0 00:03:32.771 LIB libspdk_lvol.a 00:03:32.771 SYMLINK libspdk_blobfs.so 00:03:32.771 SYMLINK libspdk_bdev.so 00:03:32.771 SO libspdk_lvol.so.11.0 00:03:33.030 SYMLINK libspdk_lvol.so 00:03:33.289 CC lib/nvmf/ctrlr.o 00:03:33.289 CC lib/nvmf/ctrlr_discovery.o 00:03:33.289 CC lib/nvmf/ctrlr_bdev.o 00:03:33.289 CC lib/nvmf/subsystem.o 00:03:33.289 CC lib/ftl/ftl_core.o 00:03:33.289 CC lib/nvmf/nvmf.o 00:03:33.289 CC lib/ftl/ftl_init.o 00:03:33.289 CC lib/nvmf/transport.o 00:03:33.289 CC lib/nvmf/nvmf_rpc.o 00:03:33.289 CC lib/ftl/ftl_layout.o 00:03:33.289 CC lib/ftl/ftl_debug.o 00:03:33.289 CC lib/nvmf/tcp.o 00:03:33.290 CC lib/nvmf/stubs.o 00:03:33.290 CC lib/ftl/ftl_io.o 00:03:33.290 CC lib/nvmf/mdns_server.o 00:03:33.290 CC lib/ftl/ftl_sb.o 00:03:33.290 CC lib/nvmf/vfio_user.o 00:03:33.290 CC lib/ublk/ublk.o 00:03:33.290 CC lib/nvmf/rdma.o 00:03:33.290 CC lib/ftl/ftl_l2p_flat.o 00:03:33.290 CC lib/ftl/ftl_l2p.o 00:03:33.290 CC lib/ublk/ublk_rpc.o 00:03:33.290 CC lib/nvmf/auth.o 00:03:33.290 CC lib/nbd/nbd.o 00:03:33.290 CC lib/ftl/ftl_nv_cache.o 00:03:33.290 CC lib/scsi/dev.o 00:03:33.290 CC lib/scsi/port.o 00:03:33.290 CC lib/nbd/nbd_rpc.o 00:03:33.290 CC lib/ftl/ftl_band.o 00:03:33.290 CC lib/scsi/lun.o 00:03:33.290 CC lib/ftl/ftl_band_ops.o 00:03:33.290 CC lib/scsi/scsi.o 00:03:33.290 CC lib/ftl/ftl_writer.o 00:03:33.290 CC lib/ftl/ftl_rq.o 00:03:33.290 CC lib/scsi/scsi_bdev.o 00:03:33.290 CC lib/ftl/ftl_reloc.o 00:03:33.290 CC lib/scsi/scsi_pr.o 00:03:33.290 CC lib/scsi/scsi_rpc.o 00:03:33.290 CC lib/ftl/ftl_p2l.o 00:03:33.290 CC 
lib/ftl/ftl_l2p_cache.o 00:03:33.290 CC lib/scsi/task.o 00:03:33.290 CC lib/ftl/ftl_p2l_log.o 00:03:33.290 CC lib/ftl/mngt/ftl_mngt.o 00:03:33.290 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:33.290 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:33.290 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:33.290 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:33.290 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:33.290 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:33.290 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:33.290 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:33.290 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:33.290 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:33.290 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:33.290 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:33.290 CC lib/ftl/utils/ftl_conf.o 00:03:33.290 CC lib/ftl/utils/ftl_md.o 00:03:33.290 CC lib/ftl/utils/ftl_bitmap.o 00:03:33.290 CC lib/ftl/utils/ftl_mempool.o 00:03:33.290 CC lib/ftl/utils/ftl_property.o 00:03:33.290 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:33.290 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:33.290 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:33.290 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:33.290 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:33.290 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:33.290 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:33.290 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:33.290 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:33.290 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:33.290 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:33.290 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:33.290 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:33.290 CC lib/ftl/base/ftl_base_dev.o 00:03:33.290 CC lib/ftl/base/ftl_base_bdev.o 00:03:33.290 CC lib/ftl/ftl_trace.o 00:03:33.872 LIB libspdk_nbd.a 00:03:33.872 LIB libspdk_scsi.a 00:03:33.872 SO libspdk_nbd.so.7.0 00:03:33.872 SO libspdk_scsi.so.9.0 00:03:33.872 SYMLINK libspdk_nbd.so 00:03:33.872 LIB libspdk_ublk.a 00:03:33.872 SYMLINK libspdk_scsi.so 00:03:33.872 SO libspdk_ublk.so.3.0 00:03:34.130 SYMLINK libspdk_ublk.so 00:03:34.130 LIB 
libspdk_ftl.a 00:03:34.130 CC lib/iscsi/conn.o 00:03:34.130 CC lib/iscsi/param.o 00:03:34.130 CC lib/iscsi/init_grp.o 00:03:34.130 CC lib/iscsi/iscsi.o 00:03:34.130 CC lib/iscsi/portal_grp.o 00:03:34.130 CC lib/iscsi/tgt_node.o 00:03:34.130 CC lib/iscsi/iscsi_subsystem.o 00:03:34.130 CC lib/vhost/vhost.o 00:03:34.130 CC lib/iscsi/iscsi_rpc.o 00:03:34.130 CC lib/vhost/vhost_rpc.o 00:03:34.130 CC lib/iscsi/task.o 00:03:34.130 CC lib/vhost/vhost_scsi.o 00:03:34.130 CC lib/vhost/vhost_blk.o 00:03:34.130 CC lib/vhost/rte_vhost_user.o 00:03:34.388 SO libspdk_ftl.so.9.0 00:03:34.388 SYMLINK libspdk_ftl.so 00:03:34.955 LIB libspdk_nvmf.a 00:03:34.955 LIB libspdk_vhost.a 00:03:34.955 SO libspdk_nvmf.so.20.0 00:03:35.214 SO libspdk_vhost.so.8.0 00:03:35.214 SYMLINK libspdk_vhost.so 00:03:35.214 SYMLINK libspdk_nvmf.so 00:03:35.214 LIB libspdk_iscsi.a 00:03:35.214 SO libspdk_iscsi.so.8.0 00:03:35.474 SYMLINK libspdk_iscsi.so 00:03:36.042 CC module/vfu_device/vfu_virtio.o 00:03:36.042 CC module/vfu_device/vfu_virtio_blk.o 00:03:36.042 CC module/vfu_device/vfu_virtio_scsi.o 00:03:36.042 CC module/vfu_device/vfu_virtio_rpc.o 00:03:36.042 CC module/env_dpdk/env_dpdk_rpc.o 00:03:36.042 CC module/vfu_device/vfu_virtio_fs.o 00:03:36.042 CC module/blob/bdev/blob_bdev.o 00:03:36.042 CC module/accel/iaa/accel_iaa.o 00:03:36.042 CC module/accel/error/accel_error.o 00:03:36.042 CC module/scheduler/gscheduler/gscheduler.o 00:03:36.042 CC module/accel/iaa/accel_iaa_rpc.o 00:03:36.042 CC module/keyring/linux/keyring.o 00:03:36.042 CC module/accel/error/accel_error_rpc.o 00:03:36.042 CC module/keyring/linux/keyring_rpc.o 00:03:36.042 CC module/accel/ioat/accel_ioat.o 00:03:36.042 CC module/accel/dsa/accel_dsa.o 00:03:36.042 CC module/fsdev/aio/fsdev_aio.o 00:03:36.042 CC module/accel/ioat/accel_ioat_rpc.o 00:03:36.042 CC module/accel/dsa/accel_dsa_rpc.o 00:03:36.042 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:36.042 CC module/fsdev/aio/linux_aio_mgr.o 00:03:36.042 CC 
module/fsdev/aio/fsdev_aio_rpc.o 00:03:36.042 CC module/keyring/file/keyring.o 00:03:36.042 CC module/keyring/file/keyring_rpc.o 00:03:36.042 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:36.042 CC module/sock/posix/posix.o 00:03:36.042 LIB libspdk_env_dpdk_rpc.a 00:03:36.042 SO libspdk_env_dpdk_rpc.so.6.0 00:03:36.300 SYMLINK libspdk_env_dpdk_rpc.so 00:03:36.300 LIB libspdk_keyring_linux.a 00:03:36.300 LIB libspdk_scheduler_gscheduler.a 00:03:36.300 LIB libspdk_keyring_file.a 00:03:36.300 LIB libspdk_scheduler_dpdk_governor.a 00:03:36.300 SO libspdk_keyring_linux.so.1.0 00:03:36.300 SO libspdk_scheduler_gscheduler.so.4.0 00:03:36.300 LIB libspdk_accel_ioat.a 00:03:36.300 SO libspdk_keyring_file.so.2.0 00:03:36.300 LIB libspdk_accel_error.a 00:03:36.300 LIB libspdk_scheduler_dynamic.a 00:03:36.300 LIB libspdk_accel_iaa.a 00:03:36.300 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:36.300 SO libspdk_accel_ioat.so.6.0 00:03:36.300 SO libspdk_accel_error.so.2.0 00:03:36.300 SYMLINK libspdk_scheduler_gscheduler.so 00:03:36.300 SO libspdk_scheduler_dynamic.so.4.0 00:03:36.300 SO libspdk_accel_iaa.so.3.0 00:03:36.300 SYMLINK libspdk_keyring_linux.so 00:03:36.300 SYMLINK libspdk_keyring_file.so 00:03:36.300 LIB libspdk_accel_dsa.a 00:03:36.300 LIB libspdk_blob_bdev.a 00:03:36.300 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:36.300 SYMLINK libspdk_accel_ioat.so 00:03:36.300 SYMLINK libspdk_accel_error.so 00:03:36.300 SO libspdk_accel_dsa.so.5.0 00:03:36.300 SO libspdk_blob_bdev.so.12.0 00:03:36.300 SYMLINK libspdk_accel_iaa.so 00:03:36.300 SYMLINK libspdk_scheduler_dynamic.so 00:03:36.300 SYMLINK libspdk_accel_dsa.so 00:03:36.300 SYMLINK libspdk_blob_bdev.so 00:03:36.559 LIB libspdk_vfu_device.a 00:03:36.559 SO libspdk_vfu_device.so.3.0 00:03:36.559 SYMLINK libspdk_vfu_device.so 00:03:36.559 LIB libspdk_fsdev_aio.a 00:03:36.559 SO libspdk_fsdev_aio.so.1.0 00:03:36.559 LIB libspdk_sock_posix.a 00:03:36.817 SO libspdk_sock_posix.so.6.0 00:03:36.817 SYMLINK 
libspdk_fsdev_aio.so 00:03:36.817 SYMLINK libspdk_sock_posix.so 00:03:36.817 CC module/bdev/error/vbdev_error_rpc.o 00:03:36.817 CC module/bdev/error/vbdev_error.o 00:03:36.817 CC module/bdev/gpt/gpt.o 00:03:36.817 CC module/bdev/gpt/vbdev_gpt.o 00:03:36.817 CC module/bdev/split/vbdev_split_rpc.o 00:03:36.817 CC module/bdev/split/vbdev_split.o 00:03:36.817 CC module/bdev/null/bdev_null.o 00:03:36.817 CC module/bdev/null/bdev_null_rpc.o 00:03:36.817 CC module/bdev/lvol/vbdev_lvol.o 00:03:36.817 CC module/bdev/delay/vbdev_delay.o 00:03:36.817 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:36.817 CC module/bdev/malloc/bdev_malloc.o 00:03:36.817 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:36.817 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:36.817 CC module/bdev/nvme/bdev_nvme.o 00:03:36.817 CC module/bdev/raid/bdev_raid.o 00:03:36.817 CC module/bdev/raid/raid0.o 00:03:36.817 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:36.817 CC module/bdev/raid/bdev_raid_sb.o 00:03:36.817 CC module/bdev/raid/bdev_raid_rpc.o 00:03:36.817 CC module/bdev/aio/bdev_aio.o 00:03:36.817 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:36.817 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:36.817 CC module/bdev/nvme/nvme_rpc.o 00:03:36.817 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:36.817 CC module/bdev/aio/bdev_aio_rpc.o 00:03:36.817 CC module/bdev/nvme/bdev_mdns_client.o 00:03:36.817 CC module/bdev/raid/raid1.o 00:03:36.817 CC module/bdev/nvme/vbdev_opal.o 00:03:36.817 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:36.817 CC module/bdev/passthru/vbdev_passthru.o 00:03:36.817 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:36.817 CC module/bdev/raid/concat.o 00:03:36.817 CC module/bdev/iscsi/bdev_iscsi.o 00:03:36.817 CC module/blobfs/bdev/blobfs_bdev.o 00:03:36.817 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:36.817 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:36.817 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:36.818 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:36.818 CC 
module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:36.818 CC module/bdev/ftl/bdev_ftl.o 00:03:36.818 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:37.076 LIB libspdk_blobfs_bdev.a 00:03:37.076 LIB libspdk_bdev_gpt.a 00:03:37.076 LIB libspdk_bdev_error.a 00:03:37.076 LIB libspdk_bdev_split.a 00:03:37.076 SO libspdk_blobfs_bdev.so.6.0 00:03:37.076 SO libspdk_bdev_gpt.so.6.0 00:03:37.335 SO libspdk_bdev_error.so.6.0 00:03:37.335 LIB libspdk_bdev_passthru.a 00:03:37.335 LIB libspdk_bdev_null.a 00:03:37.335 LIB libspdk_bdev_ftl.a 00:03:37.335 SO libspdk_bdev_split.so.6.0 00:03:37.335 SO libspdk_bdev_passthru.so.6.0 00:03:37.335 SO libspdk_bdev_ftl.so.6.0 00:03:37.335 SYMLINK libspdk_blobfs_bdev.so 00:03:37.335 SYMLINK libspdk_bdev_gpt.so 00:03:37.335 SO libspdk_bdev_null.so.6.0 00:03:37.335 LIB libspdk_bdev_zone_block.a 00:03:37.335 LIB libspdk_bdev_delay.a 00:03:37.335 SYMLINK libspdk_bdev_error.so 00:03:37.335 LIB libspdk_bdev_aio.a 00:03:37.335 SYMLINK libspdk_bdev_split.so 00:03:37.335 SO libspdk_bdev_delay.so.6.0 00:03:37.335 SO libspdk_bdev_zone_block.so.6.0 00:03:37.335 SYMLINK libspdk_bdev_passthru.so 00:03:37.335 SYMLINK libspdk_bdev_ftl.so 00:03:37.335 SO libspdk_bdev_aio.so.6.0 00:03:37.335 LIB libspdk_bdev_malloc.a 00:03:37.335 LIB libspdk_bdev_iscsi.a 00:03:37.335 SYMLINK libspdk_bdev_null.so 00:03:37.335 LIB libspdk_bdev_lvol.a 00:03:37.335 SO libspdk_bdev_iscsi.so.6.0 00:03:37.335 SO libspdk_bdev_malloc.so.6.0 00:03:37.335 SYMLINK libspdk_bdev_delay.so 00:03:37.335 SYMLINK libspdk_bdev_zone_block.so 00:03:37.335 SO libspdk_bdev_lvol.so.6.0 00:03:37.335 SYMLINK libspdk_bdev_aio.so 00:03:37.335 LIB libspdk_bdev_virtio.a 00:03:37.335 SYMLINK libspdk_bdev_iscsi.so 00:03:37.335 SYMLINK libspdk_bdev_malloc.so 00:03:37.335 SYMLINK libspdk_bdev_lvol.so 00:03:37.335 SO libspdk_bdev_virtio.so.6.0 00:03:37.595 SYMLINK libspdk_bdev_virtio.so 00:03:37.595 LIB libspdk_bdev_raid.a 00:03:37.855 SO libspdk_bdev_raid.so.6.0 00:03:37.855 SYMLINK libspdk_bdev_raid.so 00:03:38.792 LIB 
libspdk_bdev_nvme.a 00:03:38.792 SO libspdk_bdev_nvme.so.7.1 00:03:39.051 SYMLINK libspdk_bdev_nvme.so 00:03:39.619 CC module/event/subsystems/keyring/keyring.o 00:03:39.619 CC module/event/subsystems/iobuf/iobuf.o 00:03:39.619 CC module/event/subsystems/vmd/vmd.o 00:03:39.619 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:39.619 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:39.619 CC module/event/subsystems/sock/sock.o 00:03:39.619 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:39.619 CC module/event/subsystems/scheduler/scheduler.o 00:03:39.619 CC module/event/subsystems/fsdev/fsdev.o 00:03:39.619 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:39.878 LIB libspdk_event_keyring.a 00:03:39.878 LIB libspdk_event_scheduler.a 00:03:39.878 LIB libspdk_event_vmd.a 00:03:39.878 LIB libspdk_event_fsdev.a 00:03:39.878 LIB libspdk_event_sock.a 00:03:39.878 LIB libspdk_event_iobuf.a 00:03:39.878 LIB libspdk_event_vfu_tgt.a 00:03:39.878 SO libspdk_event_keyring.so.1.0 00:03:39.878 LIB libspdk_event_vhost_blk.a 00:03:39.878 SO libspdk_event_scheduler.so.4.0 00:03:39.878 SO libspdk_event_vmd.so.6.0 00:03:39.878 SO libspdk_event_sock.so.5.0 00:03:39.878 SO libspdk_event_fsdev.so.1.0 00:03:39.878 SO libspdk_event_vfu_tgt.so.3.0 00:03:39.878 SO libspdk_event_iobuf.so.3.0 00:03:39.878 SO libspdk_event_vhost_blk.so.3.0 00:03:39.878 SYMLINK libspdk_event_keyring.so 00:03:39.878 SYMLINK libspdk_event_scheduler.so 00:03:39.878 SYMLINK libspdk_event_sock.so 00:03:39.878 SYMLINK libspdk_event_fsdev.so 00:03:39.878 SYMLINK libspdk_event_vmd.so 00:03:39.878 SYMLINK libspdk_event_vfu_tgt.so 00:03:39.878 SYMLINK libspdk_event_iobuf.so 00:03:39.878 SYMLINK libspdk_event_vhost_blk.so 00:03:40.137 CC module/event/subsystems/accel/accel.o 00:03:40.397 LIB libspdk_event_accel.a 00:03:40.397 SO libspdk_event_accel.so.6.0 00:03:40.397 SYMLINK libspdk_event_accel.so 00:03:40.657 CC module/event/subsystems/bdev/bdev.o 00:03:40.916 LIB libspdk_event_bdev.a 00:03:40.916 SO 
libspdk_event_bdev.so.6.0 00:03:40.916 SYMLINK libspdk_event_bdev.so 00:03:41.484 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:41.484 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:41.484 CC module/event/subsystems/ublk/ublk.o 00:03:41.484 CC module/event/subsystems/nbd/nbd.o 00:03:41.484 CC module/event/subsystems/scsi/scsi.o 00:03:41.484 LIB libspdk_event_nbd.a 00:03:41.484 LIB libspdk_event_ublk.a 00:03:41.484 SO libspdk_event_nbd.so.6.0 00:03:41.484 LIB libspdk_event_scsi.a 00:03:41.484 SO libspdk_event_ublk.so.3.0 00:03:41.484 SO libspdk_event_scsi.so.6.0 00:03:41.484 LIB libspdk_event_nvmf.a 00:03:41.484 SYMLINK libspdk_event_nbd.so 00:03:41.484 SYMLINK libspdk_event_ublk.so 00:03:41.484 SO libspdk_event_nvmf.so.6.0 00:03:41.484 SYMLINK libspdk_event_scsi.so 00:03:41.744 SYMLINK libspdk_event_nvmf.so 00:03:42.003 CC module/event/subsystems/iscsi/iscsi.o 00:03:42.003 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:42.003 LIB libspdk_event_vhost_scsi.a 00:03:42.003 LIB libspdk_event_iscsi.a 00:03:42.003 SO libspdk_event_vhost_scsi.so.3.0 00:03:42.003 SO libspdk_event_iscsi.so.6.0 00:03:42.003 SYMLINK libspdk_event_vhost_scsi.so 00:03:42.262 SYMLINK libspdk_event_iscsi.so 00:03:42.262 SO libspdk.so.6.0 00:03:42.262 SYMLINK libspdk.so 00:03:42.839 CXX app/trace/trace.o 00:03:42.839 CC app/trace_record/trace_record.o 00:03:42.839 CC app/spdk_top/spdk_top.o 00:03:42.839 CC app/spdk_nvme_perf/perf.o 00:03:42.839 CC test/rpc_client/rpc_client_test.o 00:03:42.839 CC app/spdk_nvme_discover/discovery_aer.o 00:03:42.839 TEST_HEADER include/spdk/accel.h 00:03:42.839 TEST_HEADER include/spdk/accel_module.h 00:03:42.839 TEST_HEADER include/spdk/assert.h 00:03:42.839 TEST_HEADER include/spdk/base64.h 00:03:42.839 TEST_HEADER include/spdk/barrier.h 00:03:42.839 TEST_HEADER include/spdk/bdev.h 00:03:42.839 TEST_HEADER include/spdk/bdev_module.h 00:03:42.839 CC app/spdk_nvme_identify/identify.o 00:03:42.839 TEST_HEADER include/spdk/bdev_zone.h 00:03:42.839 
TEST_HEADER include/spdk/bit_array.h 00:03:42.839 TEST_HEADER include/spdk/bit_pool.h 00:03:42.839 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:42.839 TEST_HEADER include/spdk/blob_bdev.h 00:03:42.839 TEST_HEADER include/spdk/blob.h 00:03:42.839 TEST_HEADER include/spdk/conf.h 00:03:42.839 TEST_HEADER include/spdk/blobfs.h 00:03:42.839 TEST_HEADER include/spdk/config.h 00:03:42.839 TEST_HEADER include/spdk/cpuset.h 00:03:42.839 TEST_HEADER include/spdk/crc16.h 00:03:42.839 TEST_HEADER include/spdk/crc32.h 00:03:42.839 TEST_HEADER include/spdk/dif.h 00:03:42.839 CC app/spdk_lspci/spdk_lspci.o 00:03:42.839 TEST_HEADER include/spdk/dma.h 00:03:42.839 TEST_HEADER include/spdk/crc64.h 00:03:42.839 TEST_HEADER include/spdk/endian.h 00:03:42.839 TEST_HEADER include/spdk/env_dpdk.h 00:03:42.839 TEST_HEADER include/spdk/event.h 00:03:42.839 TEST_HEADER include/spdk/fd_group.h 00:03:42.839 TEST_HEADER include/spdk/env.h 00:03:42.839 TEST_HEADER include/spdk/file.h 00:03:42.839 TEST_HEADER include/spdk/fsdev.h 00:03:42.839 TEST_HEADER include/spdk/fd.h 00:03:42.839 TEST_HEADER include/spdk/ftl.h 00:03:42.839 TEST_HEADER include/spdk/fsdev_module.h 00:03:42.839 TEST_HEADER include/spdk/hexlify.h 00:03:42.839 TEST_HEADER include/spdk/gpt_spec.h 00:03:42.839 TEST_HEADER include/spdk/histogram_data.h 00:03:42.839 TEST_HEADER include/spdk/idxd_spec.h 00:03:42.839 TEST_HEADER include/spdk/idxd.h 00:03:42.839 TEST_HEADER include/spdk/ioat_spec.h 00:03:42.839 TEST_HEADER include/spdk/init.h 00:03:42.839 TEST_HEADER include/spdk/ioat.h 00:03:42.839 TEST_HEADER include/spdk/json.h 00:03:42.839 TEST_HEADER include/spdk/jsonrpc.h 00:03:42.839 TEST_HEADER include/spdk/keyring.h 00:03:42.839 TEST_HEADER include/spdk/iscsi_spec.h 00:03:42.839 TEST_HEADER include/spdk/keyring_module.h 00:03:42.839 TEST_HEADER include/spdk/log.h 00:03:42.839 TEST_HEADER include/spdk/likely.h 00:03:42.839 TEST_HEADER include/spdk/md5.h 00:03:42.839 TEST_HEADER include/spdk/lvol.h 00:03:42.839 TEST_HEADER 
include/spdk/memory.h 00:03:42.839 TEST_HEADER include/spdk/nbd.h 00:03:42.839 TEST_HEADER include/spdk/notify.h 00:03:42.839 TEST_HEADER include/spdk/mmio.h 00:03:42.839 TEST_HEADER include/spdk/nvme.h 00:03:42.839 TEST_HEADER include/spdk/nvme_intel.h 00:03:42.839 TEST_HEADER include/spdk/net.h 00:03:42.839 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:42.839 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:42.839 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:42.839 CC app/spdk_dd/spdk_dd.o 00:03:42.839 TEST_HEADER include/spdk/nvme_zns.h 00:03:42.839 CC app/nvmf_tgt/nvmf_main.o 00:03:42.839 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:42.839 TEST_HEADER include/spdk/nvme_spec.h 00:03:42.839 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:42.839 TEST_HEADER include/spdk/nvmf_transport.h 00:03:42.839 CC app/iscsi_tgt/iscsi_tgt.o 00:03:42.839 TEST_HEADER include/spdk/nvmf.h 00:03:42.839 TEST_HEADER include/spdk/opal.h 00:03:42.839 TEST_HEADER include/spdk/nvmf_spec.h 00:03:42.839 TEST_HEADER include/spdk/pci_ids.h 00:03:42.839 TEST_HEADER include/spdk/opal_spec.h 00:03:42.839 TEST_HEADER include/spdk/queue.h 00:03:42.839 TEST_HEADER include/spdk/rpc.h 00:03:42.839 TEST_HEADER include/spdk/pipe.h 00:03:42.839 TEST_HEADER include/spdk/reduce.h 00:03:42.839 TEST_HEADER include/spdk/scheduler.h 00:03:42.839 TEST_HEADER include/spdk/scsi.h 00:03:42.839 TEST_HEADER include/spdk/scsi_spec.h 00:03:42.839 TEST_HEADER include/spdk/sock.h 00:03:42.839 TEST_HEADER include/spdk/stdinc.h 00:03:42.839 TEST_HEADER include/spdk/thread.h 00:03:42.839 TEST_HEADER include/spdk/string.h 00:03:42.839 TEST_HEADER include/spdk/trace_parser.h 00:03:42.839 TEST_HEADER include/spdk/tree.h 00:03:42.839 TEST_HEADER include/spdk/trace.h 00:03:42.839 TEST_HEADER include/spdk/ublk.h 00:03:42.839 CC app/spdk_tgt/spdk_tgt.o 00:03:42.839 TEST_HEADER include/spdk/util.h 00:03:42.839 TEST_HEADER include/spdk/uuid.h 00:03:42.839 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:42.839 TEST_HEADER 
include/spdk/version.h 00:03:42.839 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:42.839 TEST_HEADER include/spdk/vhost.h 00:03:42.839 TEST_HEADER include/spdk/vmd.h 00:03:42.839 TEST_HEADER include/spdk/xor.h 00:03:42.839 CXX test/cpp_headers/assert.o 00:03:42.839 CXX test/cpp_headers/accel.o 00:03:42.839 CXX test/cpp_headers/accel_module.o 00:03:42.839 TEST_HEADER include/spdk/zipf.h 00:03:42.839 CXX test/cpp_headers/barrier.o 00:03:42.839 CXX test/cpp_headers/bdev_module.o 00:03:42.839 CXX test/cpp_headers/base64.o 00:03:42.839 CXX test/cpp_headers/bdev.o 00:03:42.839 CXX test/cpp_headers/bit_pool.o 00:03:42.839 CXX test/cpp_headers/bit_array.o 00:03:42.839 CXX test/cpp_headers/bdev_zone.o 00:03:42.839 CXX test/cpp_headers/blob_bdev.o 00:03:42.839 CXX test/cpp_headers/blobfs_bdev.o 00:03:42.839 CXX test/cpp_headers/blobfs.o 00:03:42.839 CXX test/cpp_headers/blob.o 00:03:42.839 CXX test/cpp_headers/conf.o 00:03:42.839 CXX test/cpp_headers/config.o 00:03:42.839 CXX test/cpp_headers/cpuset.o 00:03:42.839 CXX test/cpp_headers/crc32.o 00:03:42.839 CXX test/cpp_headers/crc16.o 00:03:42.839 CXX test/cpp_headers/dif.o 00:03:42.839 CXX test/cpp_headers/crc64.o 00:03:42.839 CXX test/cpp_headers/endian.o 00:03:42.839 CXX test/cpp_headers/dma.o 00:03:42.839 CXX test/cpp_headers/env_dpdk.o 00:03:42.839 CXX test/cpp_headers/env.o 00:03:42.839 CXX test/cpp_headers/fd_group.o 00:03:42.839 CXX test/cpp_headers/event.o 00:03:42.839 CXX test/cpp_headers/file.o 00:03:42.839 CXX test/cpp_headers/fd.o 00:03:42.839 CXX test/cpp_headers/fsdev_module.o 00:03:42.839 CXX test/cpp_headers/fsdev.o 00:03:42.839 CXX test/cpp_headers/ftl.o 00:03:42.839 CXX test/cpp_headers/gpt_spec.o 00:03:42.839 CXX test/cpp_headers/hexlify.o 00:03:42.839 CXX test/cpp_headers/histogram_data.o 00:03:42.839 CXX test/cpp_headers/idxd.o 00:03:42.839 CXX test/cpp_headers/ioat_spec.o 00:03:42.839 CXX test/cpp_headers/init.o 00:03:42.839 CXX test/cpp_headers/idxd_spec.o 00:03:42.839 CXX test/cpp_headers/ioat.o 
00:03:42.839 CXX test/cpp_headers/iscsi_spec.o 00:03:42.839 CXX test/cpp_headers/jsonrpc.o 00:03:42.839 CXX test/cpp_headers/json.o 00:03:42.839 CXX test/cpp_headers/keyring.o 00:03:42.839 CXX test/cpp_headers/keyring_module.o 00:03:42.839 CXX test/cpp_headers/likely.o 00:03:42.839 CXX test/cpp_headers/log.o 00:03:42.839 CXX test/cpp_headers/lvol.o 00:03:42.839 CXX test/cpp_headers/md5.o 00:03:42.839 CXX test/cpp_headers/mmio.o 00:03:42.839 CXX test/cpp_headers/nbd.o 00:03:42.839 CXX test/cpp_headers/net.o 00:03:42.839 CXX test/cpp_headers/memory.o 00:03:42.839 CXX test/cpp_headers/notify.o 00:03:42.839 CXX test/cpp_headers/nvme.o 00:03:42.839 CXX test/cpp_headers/nvme_intel.o 00:03:42.839 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:42.839 CXX test/cpp_headers/nvme_ocssd.o 00:03:42.839 CXX test/cpp_headers/nvme_spec.o 00:03:42.839 CXX test/cpp_headers/nvme_zns.o 00:03:42.839 CXX test/cpp_headers/nvmf.o 00:03:42.839 CXX test/cpp_headers/nvmf_cmd.o 00:03:42.839 CXX test/cpp_headers/nvmf_spec.o 00:03:42.839 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:42.839 CXX test/cpp_headers/nvmf_transport.o 00:03:42.839 CXX test/cpp_headers/opal.o 00:03:42.839 CC examples/util/zipf/zipf.o 00:03:42.839 CXX test/cpp_headers/opal_spec.o 00:03:42.839 CC test/app/histogram_perf/histogram_perf.o 00:03:42.839 CC test/thread/poller_perf/poller_perf.o 00:03:42.839 CXX test/cpp_headers/pci_ids.o 00:03:42.839 CC test/app/jsoncat/jsoncat.o 00:03:42.839 CC test/env/vtophys/vtophys.o 00:03:42.839 CC examples/ioat/perf/perf.o 00:03:42.839 CC test/env/memory/memory_ut.o 00:03:42.839 CC test/dma/test_dma/test_dma.o 00:03:42.839 CC test/app/stub/stub.o 00:03:42.839 CC app/fio/nvme/fio_plugin.o 00:03:42.839 CC test/env/pci/pci_ut.o 00:03:42.839 CC examples/ioat/verify/verify.o 00:03:42.839 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:42.840 CC test/app/bdev_svc/bdev_svc.o 00:03:43.120 CC app/fio/bdev/fio_plugin.o 00:03:43.120 LINK spdk_lspci 00:03:43.120 LINK rpc_client_test 00:03:43.120 
LINK nvmf_tgt 00:03:43.120 CC test/env/mem_callbacks/mem_callbacks.o 00:03:43.120 LINK interrupt_tgt 00:03:43.381 LINK spdk_trace_record 00:03:43.381 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:43.381 LINK iscsi_tgt 00:03:43.381 LINK histogram_perf 00:03:43.381 LINK zipf 00:03:43.381 LINK spdk_tgt 00:03:43.381 CXX test/cpp_headers/pipe.o 00:03:43.381 LINK vtophys 00:03:43.381 CXX test/cpp_headers/queue.o 00:03:43.381 LINK spdk_nvme_discover 00:03:43.381 CXX test/cpp_headers/reduce.o 00:03:43.381 CXX test/cpp_headers/rpc.o 00:03:43.381 CXX test/cpp_headers/scheduler.o 00:03:43.381 CXX test/cpp_headers/scsi_spec.o 00:03:43.381 CXX test/cpp_headers/sock.o 00:03:43.381 CXX test/cpp_headers/scsi.o 00:03:43.381 CXX test/cpp_headers/stdinc.o 00:03:43.381 CXX test/cpp_headers/string.o 00:03:43.381 CXX test/cpp_headers/thread.o 00:03:43.381 CXX test/cpp_headers/trace.o 00:03:43.381 CXX test/cpp_headers/trace_parser.o 00:03:43.381 CXX test/cpp_headers/tree.o 00:03:43.381 CXX test/cpp_headers/ublk.o 00:03:43.381 CXX test/cpp_headers/util.o 00:03:43.381 CXX test/cpp_headers/uuid.o 00:03:43.381 CXX test/cpp_headers/version.o 00:03:43.381 CXX test/cpp_headers/vfio_user_pci.o 00:03:43.381 CXX test/cpp_headers/vfio_user_spec.o 00:03:43.381 CXX test/cpp_headers/vmd.o 00:03:43.381 CXX test/cpp_headers/vhost.o 00:03:43.381 LINK bdev_svc 00:03:43.381 LINK ioat_perf 00:03:43.381 CXX test/cpp_headers/xor.o 00:03:43.381 CXX test/cpp_headers/zipf.o 00:03:43.640 LINK jsoncat 00:03:43.640 LINK poller_perf 00:03:43.640 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:43.640 LINK spdk_trace 00:03:43.640 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:43.640 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:43.640 LINK stub 00:03:43.640 LINK env_dpdk_post_init 00:03:43.640 LINK verify 00:03:43.640 LINK spdk_dd 00:03:43.899 LINK nvme_fuzz 00:03:43.899 LINK spdk_nvme 00:03:43.899 LINK pci_ut 00:03:43.899 LINK test_dma 00:03:43.899 CC app/vhost/vhost.o 00:03:43.899 CC examples/vmd/lsvmd/lsvmd.o 
00:03:43.899 CC examples/sock/hello_world/hello_sock.o 00:03:43.899 CC examples/vmd/led/led.o 00:03:43.899 CC examples/idxd/perf/perf.o 00:03:43.899 LINK spdk_nvme_identify 00:03:43.899 LINK mem_callbacks 00:03:43.899 LINK spdk_nvme_perf 00:03:43.899 LINK spdk_bdev 00:03:43.899 CC examples/thread/thread/thread_ex.o 00:03:43.899 LINK vhost_fuzz 00:03:44.157 CC test/event/reactor_perf/reactor_perf.o 00:03:44.157 CC test/event/reactor/reactor.o 00:03:44.157 CC test/event/event_perf/event_perf.o 00:03:44.157 CC test/event/app_repeat/app_repeat.o 00:03:44.157 CC test/event/scheduler/scheduler.o 00:03:44.157 LINK spdk_top 00:03:44.157 LINK lsvmd 00:03:44.157 LINK led 00:03:44.157 LINK vhost 00:03:44.157 LINK hello_sock 00:03:44.157 LINK reactor_perf 00:03:44.157 LINK event_perf 00:03:44.158 LINK reactor 00:03:44.158 LINK app_repeat 00:03:44.158 LINK thread 00:03:44.158 LINK idxd_perf 00:03:44.417 LINK scheduler 00:03:44.417 CC test/nvme/reserve/reserve.o 00:03:44.417 CC test/nvme/aer/aer.o 00:03:44.417 CC test/nvme/fdp/fdp.o 00:03:44.417 CC test/nvme/fused_ordering/fused_ordering.o 00:03:44.417 CC test/nvme/connect_stress/connect_stress.o 00:03:44.417 LINK memory_ut 00:03:44.417 CC test/nvme/e2edp/nvme_dp.o 00:03:44.417 CC test/nvme/cuse/cuse.o 00:03:44.417 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:44.417 CC test/nvme/compliance/nvme_compliance.o 00:03:44.417 CC test/nvme/reset/reset.o 00:03:44.417 CC test/nvme/overhead/overhead.o 00:03:44.417 CC test/nvme/err_injection/err_injection.o 00:03:44.417 CC test/nvme/simple_copy/simple_copy.o 00:03:44.417 CC test/nvme/boot_partition/boot_partition.o 00:03:44.417 CC test/nvme/startup/startup.o 00:03:44.417 CC test/nvme/sgl/sgl.o 00:03:44.417 CC test/blobfs/mkfs/mkfs.o 00:03:44.417 CC test/accel/dif/dif.o 00:03:44.417 CC test/lvol/esnap/esnap.o 00:03:44.675 LINK connect_stress 00:03:44.675 LINK startup 00:03:44.675 LINK reserve 00:03:44.675 LINK boot_partition 00:03:44.675 LINK err_injection 00:03:44.675 LINK 
fused_ordering 00:03:44.675 LINK doorbell_aers 00:03:44.675 CC examples/nvme/hello_world/hello_world.o 00:03:44.675 CC examples/nvme/arbitration/arbitration.o 00:03:44.675 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:44.675 CC examples/nvme/abort/abort.o 00:03:44.675 CC examples/nvme/reconnect/reconnect.o 00:03:44.675 LINK mkfs 00:03:44.675 LINK simple_copy 00:03:44.675 LINK reset 00:03:44.675 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:44.675 LINK aer 00:03:44.675 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:44.675 CC examples/nvme/hotplug/hotplug.o 00:03:44.675 LINK overhead 00:03:44.675 LINK nvme_dp 00:03:44.675 LINK sgl 00:03:44.675 LINK fdp 00:03:44.675 LINK nvme_compliance 00:03:44.675 CC examples/accel/perf/accel_perf.o 00:03:44.675 CC examples/blob/cli/blobcli.o 00:03:44.675 CC examples/blob/hello_world/hello_blob.o 00:03:44.675 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:44.933 LINK pmr_persistence 00:03:44.933 LINK cmb_copy 00:03:44.933 LINK hello_world 00:03:44.933 LINK hotplug 00:03:44.933 LINK arbitration 00:03:44.933 LINK abort 00:03:44.933 LINK reconnect 00:03:44.933 LINK iscsi_fuzz 00:03:44.933 LINK dif 00:03:44.933 LINK hello_blob 00:03:44.933 LINK nvme_manage 00:03:44.933 LINK hello_fsdev 00:03:45.192 LINK accel_perf 00:03:45.192 LINK blobcli 00:03:45.451 LINK cuse 00:03:45.451 CC test/bdev/bdevio/bdevio.o 00:03:45.710 CC examples/bdev/hello_world/hello_bdev.o 00:03:45.710 CC examples/bdev/bdevperf/bdevperf.o 00:03:45.969 LINK bdevio 00:03:45.969 LINK hello_bdev 00:03:46.228 LINK bdevperf 00:03:46.807 CC examples/nvmf/nvmf/nvmf.o 00:03:47.083 LINK nvmf 00:03:48.079 LINK esnap 00:03:48.338 00:03:48.338 real 0m55.629s 00:03:48.338 user 8m2.190s 00:03:48.338 sys 3m41.635s 00:03:48.338 12:12:10 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:48.338 12:12:10 make -- common/autotest_common.sh@10 -- $ set +x 00:03:48.338 ************************************ 00:03:48.338 END TEST make 00:03:48.338 
************************************ 00:03:48.338 12:12:10 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:48.338 12:12:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:48.338 12:12:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:48.338 12:12:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.338 12:12:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/collect-cpu-load.pid ]] 00:03:48.338 12:12:10 -- pm/common@44 -- $ pid=1354160 00:03:48.338 12:12:10 -- pm/common@50 -- $ kill -TERM 1354160 00:03:48.338 12:12:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.338 12:12:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/collect-vmstat.pid ]] 00:03:48.338 12:12:10 -- pm/common@44 -- $ pid=1354162 00:03:48.338 12:12:10 -- pm/common@50 -- $ kill -TERM 1354162 00:03:48.338 12:12:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.338 12:12:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:48.338 12:12:10 -- pm/common@44 -- $ pid=1354164 00:03:48.338 12:12:10 -- pm/common@50 -- $ kill -TERM 1354164 00:03:48.338 12:12:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.338 12:12:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:48.338 12:12:10 -- pm/common@44 -- $ pid=1354187 00:03:48.338 12:12:10 -- pm/common@50 -- $ sudo -E kill -TERM 1354187 00:03:48.338 12:12:10 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:48.338 12:12:10 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf 00:03:48.598 12:12:10 -- common/autotest_common.sh@1710 -- # [[ y == 
y ]] 00:03:48.598 12:12:10 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:48.598 12:12:10 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:48.598 12:12:10 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:48.598 12:12:10 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:48.598 12:12:10 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:48.598 12:12:10 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:48.598 12:12:10 -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.598 12:12:10 -- scripts/common.sh@336 -- # read -ra ver1 00:03:48.598 12:12:10 -- scripts/common.sh@337 -- # IFS=.-: 00:03:48.598 12:12:10 -- scripts/common.sh@337 -- # read -ra ver2 00:03:48.598 12:12:10 -- scripts/common.sh@338 -- # local 'op=<' 00:03:48.598 12:12:10 -- scripts/common.sh@340 -- # ver1_l=2 00:03:48.598 12:12:10 -- scripts/common.sh@341 -- # ver2_l=1 00:03:48.598 12:12:10 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:48.598 12:12:10 -- scripts/common.sh@344 -- # case "$op" in 00:03:48.598 12:12:10 -- scripts/common.sh@345 -- # : 1 00:03:48.598 12:12:10 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:48.598 12:12:10 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:48.598 12:12:10 -- scripts/common.sh@365 -- # decimal 1 00:03:48.598 12:12:10 -- scripts/common.sh@353 -- # local d=1 00:03:48.598 12:12:10 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.598 12:12:10 -- scripts/common.sh@355 -- # echo 1 00:03:48.598 12:12:10 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:48.598 12:12:10 -- scripts/common.sh@366 -- # decimal 2 00:03:48.598 12:12:10 -- scripts/common.sh@353 -- # local d=2 00:03:48.598 12:12:10 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.598 12:12:10 -- scripts/common.sh@355 -- # echo 2 00:03:48.598 12:12:10 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:48.598 12:12:10 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:48.598 12:12:10 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:48.598 12:12:10 -- scripts/common.sh@368 -- # return 0 00:03:48.598 12:12:10 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.598 12:12:10 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:48.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.598 --rc genhtml_branch_coverage=1 00:03:48.598 --rc genhtml_function_coverage=1 00:03:48.598 --rc genhtml_legend=1 00:03:48.598 --rc geninfo_all_blocks=1 00:03:48.598 --rc geninfo_unexecuted_blocks=1 00:03:48.598 00:03:48.598 ' 00:03:48.598 12:12:10 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:48.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.598 --rc genhtml_branch_coverage=1 00:03:48.598 --rc genhtml_function_coverage=1 00:03:48.598 --rc genhtml_legend=1 00:03:48.598 --rc geninfo_all_blocks=1 00:03:48.598 --rc geninfo_unexecuted_blocks=1 00:03:48.598 00:03:48.598 ' 00:03:48.598 12:12:10 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:48.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.598 --rc genhtml_branch_coverage=1 00:03:48.598 --rc 
genhtml_function_coverage=1 00:03:48.598 --rc genhtml_legend=1 00:03:48.598 --rc geninfo_all_blocks=1 00:03:48.598 --rc geninfo_unexecuted_blocks=1 00:03:48.598 00:03:48.598 ' 00:03:48.598 12:12:10 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:48.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.598 --rc genhtml_branch_coverage=1 00:03:48.598 --rc genhtml_function_coverage=1 00:03:48.598 --rc genhtml_legend=1 00:03:48.598 --rc geninfo_all_blocks=1 00:03:48.598 --rc geninfo_unexecuted_blocks=1 00:03:48.598 00:03:48.598 ' 00:03:48.598 12:12:10 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:03:48.598 12:12:10 -- nvmf/common.sh@7 -- # uname -s 00:03:48.598 12:12:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:48.598 12:12:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:48.598 12:12:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:48.598 12:12:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:48.599 12:12:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:48.599 12:12:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:48.599 12:12:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:48.599 12:12:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:48.599 12:12:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:48.599 12:12:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:48.599 12:12:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:48.599 12:12:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:48.599 12:12:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:48.599 12:12:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:48.599 12:12:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:48.599 12:12:10 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:48.599 12:12:10 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:03:48.599 12:12:10 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:48.599 12:12:10 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:48.599 12:12:10 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:48.599 12:12:10 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:48.599 12:12:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.599 12:12:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.599 12:12:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.599 12:12:10 -- paths/export.sh@5 -- # export PATH 00:03:48.599 12:12:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.599 12:12:10 -- nvmf/common.sh@51 -- # : 0 00:03:48.599 12:12:10 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:48.599 12:12:10 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:48.599 12:12:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:48.599 12:12:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:48.599 12:12:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:48.599 12:12:10 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:48.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:48.599 12:12:10 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:48.599 12:12:10 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:48.599 12:12:10 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:48.599 12:12:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:48.599 12:12:10 -- spdk/autotest.sh@32 -- # uname -s 00:03:48.599 12:12:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:48.599 12:12:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:48.599 12:12:10 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/coredumps 00:03:48.599 12:12:10 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/core-collector.sh %P %s %t' 00:03:48.599 12:12:10 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/coredumps 00:03:48.599 12:12:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:48.599 12:12:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:48.599 12:12:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:48.599 12:12:10 -- spdk/autotest.sh@48 -- # udevadm_pid=1417145 00:03:48.599 12:12:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:48.599 12:12:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:48.599 12:12:10 -- pm/common@17 -- # local monitor 00:03:48.599 12:12:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.599 12:12:10 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:48.599 12:12:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.599 12:12:10 -- pm/common@21 -- # date +%s 00:03:48.599 12:12:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.599 12:12:10 -- pm/common@21 -- # date +%s 00:03:48.599 12:12:10 -- pm/common@25 -- # sleep 1 00:03:48.599 12:12:10 -- pm/common@21 -- # date +%s 00:03:48.599 12:12:10 -- pm/common@21 -- # date +%s 00:03:48.599 12:12:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autotest.sh.1733829130 00:03:48.599 12:12:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autotest.sh.1733829130 00:03:48.599 12:12:10 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autotest.sh.1733829130 00:03:48.599 12:12:10 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autotest.sh.1733829130 00:03:48.858 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autotest.sh.1733829130_collect-cpu-load.pm.log 00:03:48.858 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autotest.sh.1733829130_collect-vmstat.pm.log 00:03:48.858 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autotest.sh.1733829130_collect-cpu-temp.pm.log 00:03:48.859 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autotest.sh.1733829130_collect-bmc-pm.bmc.pm.log 00:03:49.796 12:12:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:49.796 12:12:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:49.796 12:12:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:49.796 12:12:11 -- common/autotest_common.sh@10 -- # set +x 00:03:49.796 12:12:11 -- spdk/autotest.sh@59 -- # create_test_list 00:03:49.796 12:12:11 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:49.796 12:12:11 -- common/autotest_common.sh@10 -- # set +x 00:03:49.796 12:12:11 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/autotest.sh 00:03:49.796 12:12:11 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:03:49.796 12:12:11 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:03:49.796 12:12:11 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output 00:03:49.796 12:12:11 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:03:49.796 12:12:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:49.796 12:12:11 -- common/autotest_common.sh@1457 -- # uname 00:03:49.796 12:12:11 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:49.797 12:12:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:49.797 12:12:11 -- common/autotest_common.sh@1477 -- # uname 00:03:49.797 12:12:11 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:49.797 12:12:11 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:49.797 12:12:11 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 
--version 00:03:49.797 lcov: LCOV version 1.15 00:03:49.797 12:12:11 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_base.info 00:04:04.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:04.684 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/nvme/nvme_stubs.gcno 00:04:16.896 12:12:37 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:16.896 12:12:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.896 12:12:37 -- common/autotest_common.sh@10 -- # set +x 00:04:16.896 12:12:37 -- spdk/autotest.sh@78 -- # rm -f 00:04:16.896 12:12:37 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:04:18.277 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:18.277 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:18.277 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:18.277 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:18.277 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:18.277 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:18.277 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:18.277 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:18.277 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:18.277 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:18.277 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:18.277 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:18.277 
0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:18.277 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:18.277 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:18.277 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:18.537 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:18.537 12:12:40 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:18.537 12:12:40 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:18.537 12:12:40 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:18.537 12:12:40 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:18.537 12:12:40 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:18.537 12:12:40 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:18.537 12:12:40 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:18.537 12:12:40 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:04:18.537 12:12:40 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:18.538 12:12:40 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:18.538 12:12:40 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:18.538 12:12:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:18.538 12:12:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:18.538 12:12:40 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:18.538 12:12:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:18.538 12:12:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:18.538 12:12:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:18.538 12:12:40 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:18.538 12:12:40 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:18.538 No valid GPT data, bailing 00:04:18.538 12:12:40 -- scripts/common.sh@394 -- 
# blkid -s PTTYPE -o value /dev/nvme0n1 00:04:18.538 12:12:40 -- scripts/common.sh@394 -- # pt= 00:04:18.538 12:12:40 -- scripts/common.sh@395 -- # return 1 00:04:18.538 12:12:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:18.538 1+0 records in 00:04:18.538 1+0 records out 00:04:18.538 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00409301 s, 256 MB/s 00:04:18.538 12:12:40 -- spdk/autotest.sh@105 -- # sync 00:04:18.538 12:12:40 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:18.538 12:12:40 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:18.538 12:12:40 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:25.112 12:12:46 -- spdk/autotest.sh@111 -- # uname -s 00:04:25.112 12:12:46 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:25.112 12:12:46 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:25.112 12:12:46 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh status 00:04:27.020 Hugepages 00:04:27.020 node hugesize free / total 00:04:27.020 node0 1048576kB 0 / 0 00:04:27.020 node0 2048kB 0 / 0 00:04:27.020 node1 1048576kB 0 / 0 00:04:27.020 node1 2048kB 0 / 0 00:04:27.020 00:04:27.020 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:27.020 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:27.020 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:27.020 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:27.020 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:27.020 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:27.020 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:27.020 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:27.020 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:27.020 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:27.020 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:27.020 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:27.020 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 
00:04:27.020 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:27.020 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:27.020 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:27.020 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:27.020 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:27.020 12:12:49 -- spdk/autotest.sh@117 -- # uname -s 00:04:27.020 12:12:49 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:27.020 12:12:49 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:27.020 12:12:49 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:04:30.312 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:30.312 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:30.312 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:30.312 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:30.312 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:30.312 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:30.312 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:30.312 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:30.312 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:30.312 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:30.312 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:30.312 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:30.312 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:30.312 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:30.312 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:30.312 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:30.882 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:30.882 12:12:52 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:32.262 12:12:53 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:32.262 12:12:53 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:32.262 12:12:53 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:32.262 12:12:53 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:32.262 
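The `get_nvme_bdfs` helper entered here builds its list of controller PCI addresses by piping `gen_nvme.sh` JSON through `jq -r '.config[].params.traddr'`, as the trace that follows shows. A self-contained approximation against a made-up config snippet — the JSON shape is inferred from that jq filter, and `jq` is assumed to be installed (as it is for these scripts):

```shell
# Hypothetical JSON standing in for gen_nvme.sh output; the .config[].params.traddr
# shape is taken from the jq filter used by get_nvme_bdfs in the trace.
cat > /tmp/gen_nvme_example.json <<'EOF'
{"config": [{"params": {"trtype": "PCIe", "traddr": "0000:5e:00.0"}}]}
EOF

# The same extraction get_nvme_bdfs performs: one PCI BDF per attached NVMe controller.
bdfs=($(jq -r '.config[].params.traddr' /tmp/gen_nvme_example.json))
printf '%s\n' "${bdfs[@]}"
```

With one controller attached (as on this node), the array holds a single BDF and the `(( 1 == 0 ))` emptiness check in the trace passes it through.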
12:12:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:32.262 12:12:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:32.262 12:12:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:32.262 12:12:54 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/gen_nvme.sh 00:04:32.262 12:12:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:32.262 12:12:54 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:32.262 12:12:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:32.262 12:12:54 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:04:34.801 Waiting for block devices as requested 00:04:34.801 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:35.061 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:35.061 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:35.061 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:35.061 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:35.320 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:35.320 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:35.320 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:35.580 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:35.580 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:35.580 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:35.839 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:35.839 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:35.839 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:35.839 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:36.098 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:36.098 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:36.098 12:12:58 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:36.098 12:12:58 -- common/autotest_common.sh@1525 -- # 
get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:36.098 12:12:58 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:36.098 12:12:58 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:36.098 12:12:58 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:36.098 12:12:58 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:36.098 12:12:58 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:36.098 12:12:58 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:36.099 12:12:58 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:36.099 12:12:58 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:36.099 12:12:58 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:36.099 12:12:58 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:36.099 12:12:58 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:36.099 12:12:58 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:36.099 12:12:58 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:36.099 12:12:58 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:36.099 12:12:58 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:36.099 12:12:58 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:36.099 12:12:58 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:36.358 12:12:58 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:36.358 12:12:58 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:36.358 12:12:58 -- common/autotest_common.sh@1543 -- # continue 00:04:36.358 12:12:58 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:36.358 12:12:58 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:36.358 12:12:58 -- common/autotest_common.sh@10 -- # set +x 00:04:36.358 
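The OACS probe above greps `nvme id-ctrl` for the Optional Admin Command Support field and masks bit 3 (0x8, Namespace Management) to decide whether the controller is worth reverting. Redoing that arithmetic on the exact value captured in the trace (`0xe`):

```shell
# Reproduce the oacs parsing from the trace; "0xe" is the value the grep captured above.
id_ctrl_line='oacs      : 0xe'
oacs=$(echo "$id_ctrl_line" | cut -d: -f2)   # yields ' 0xe', leading space and all
oacs_ns_manage=$((oacs & 0x8))               # bit 3 = Namespace Management support
echo "oacs_ns_manage=$oacs_ns_manage"        # 8, i.e. non-zero -> supported
```

0xe is binary 1110, so bit 3 is set and the `[[ 8 -ne 0 ]]` branch in the trace proceeds to the `unvmcap` check.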
12:12:58 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:36.358 12:12:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:36.358 12:12:58 -- common/autotest_common.sh@10 -- # set +x 00:04:36.358 12:12:58 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:04:39.648 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:39.648 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:39.648 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:39.648 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:39.648 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:39.648 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:39.648 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:39.648 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:39.648 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:39.648 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:39.648 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:39.648 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:39.648 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:39.648 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:39.648 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:39.648 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:39.908 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:40.166 12:13:02 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:40.166 12:13:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:40.166 12:13:02 -- common/autotest_common.sh@10 -- # set +x 00:04:40.166 12:13:02 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:40.166 12:13:02 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:40.166 12:13:02 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:40.166 12:13:02 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:40.166 12:13:02 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:40.166 12:13:02 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 
00:04:40.166 12:13:02 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:40.166 12:13:02 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:40.166 12:13:02 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:40.166 12:13:02 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:40.167 12:13:02 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:40.167 12:13:02 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/gen_nvme.sh 00:04:40.167 12:13:02 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:40.426 12:13:02 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:40.426 12:13:02 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:40.426 12:13:02 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:40.426 12:13:02 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:40.426 12:13:02 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:40.426 12:13:02 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:40.426 12:13:02 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:40.426 12:13:02 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:40.426 12:13:02 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:40.426 12:13:02 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:40.426 12:13:02 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1431603 00:04:40.426 12:13:02 -- common/autotest_common.sh@1585 -- # waitforlisten 1431603 00:04:40.426 12:13:02 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:04:40.426 12:13:02 -- common/autotest_common.sh@835 -- # '[' -z 1431603 ']' 00:04:40.426 12:13:02 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.426 12:13:02 -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.426 12:13:02 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.426 12:13:02 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.426 12:13:02 -- common/autotest_common.sh@10 -- # set +x 00:04:40.426 [2024-12-10 12:13:02.401320] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:04:40.426 [2024-12-10 12:13:02.401372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431603 ] 00:04:40.426 [2024-12-10 12:13:02.478969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.426 [2024-12-10 12:13:02.520633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.685 12:13:02 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.685 12:13:02 -- common/autotest_common.sh@868 -- # return 0 00:04:40.685 12:13:02 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:40.685 12:13:02 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:40.685 12:13:02 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:43.976 nvme0n1 00:04:43.976 12:13:05 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:43.976 [2024-12-10 12:13:05.907939] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:43.976 request: 00:04:43.976 { 00:04:43.976 "nvme_ctrlr_name": "nvme0", 00:04:43.976 "password": "test", 
00:04:43.976 "method": "bdev_nvme_opal_revert", 00:04:43.976 "req_id": 1 00:04:43.976 } 00:04:43.976 Got JSON-RPC error response 00:04:43.976 response: 00:04:43.976 { 00:04:43.976 "code": -32602, 00:04:43.976 "message": "Invalid parameters" 00:04:43.976 } 00:04:43.976 12:13:05 -- common/autotest_common.sh@1591 -- # true 00:04:43.976 12:13:05 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:43.976 12:13:05 -- common/autotest_common.sh@1595 -- # killprocess 1431603 00:04:43.976 12:13:05 -- common/autotest_common.sh@954 -- # '[' -z 1431603 ']' 00:04:43.976 12:13:05 -- common/autotest_common.sh@958 -- # kill -0 1431603 00:04:43.976 12:13:05 -- common/autotest_common.sh@959 -- # uname 00:04:43.976 12:13:05 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.976 12:13:05 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1431603 00:04:43.976 12:13:05 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.976 12:13:05 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.976 12:13:05 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1431603' 00:04:43.976 killing process with pid 1431603 00:04:43.976 12:13:05 -- common/autotest_common.sh@973 -- # kill 1431603 00:04:43.976 12:13:05 -- common/autotest_common.sh@978 -- # wait 1431603 00:04:45.882 12:13:07 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:45.882 12:13:07 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:45.882 12:13:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:45.882 12:13:07 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:45.882 12:13:07 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:45.882 12:13:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.882 12:13:07 -- common/autotest_common.sh@10 -- # set +x 00:04:45.882 12:13:07 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:45.882 12:13:07 -- spdk/autotest.sh@155 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/env.sh 00:04:45.882 12:13:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.882 12:13:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.882 12:13:07 -- common/autotest_common.sh@10 -- # set +x 00:04:45.882 ************************************ 00:04:45.882 START TEST env 00:04:45.882 ************************************ 00:04:45.882 12:13:07 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/env.sh 00:04:45.882 * Looking for test storage... 00:04:45.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env 00:04:45.882 12:13:07 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:45.882 12:13:07 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:45.882 12:13:07 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:45.882 12:13:07 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:45.882 12:13:07 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.882 12:13:07 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.882 12:13:07 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.882 12:13:07 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.882 12:13:07 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.882 12:13:07 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.882 12:13:07 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.882 12:13:07 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.882 12:13:07 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.882 12:13:07 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.882 12:13:07 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.882 12:13:07 env -- scripts/common.sh@344 -- # case "$op" in 00:04:45.882 12:13:07 env -- scripts/common.sh@345 -- # : 1 00:04:45.882 12:13:07 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.882 12:13:07 
env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:45.882 12:13:07 env -- scripts/common.sh@365 -- # decimal 1 00:04:45.882 12:13:07 env -- scripts/common.sh@353 -- # local d=1 00:04:45.882 12:13:07 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.882 12:13:07 env -- scripts/common.sh@355 -- # echo 1 00:04:45.882 12:13:07 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.882 12:13:07 env -- scripts/common.sh@366 -- # decimal 2 00:04:45.882 12:13:07 env -- scripts/common.sh@353 -- # local d=2 00:04:45.882 12:13:07 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.882 12:13:07 env -- scripts/common.sh@355 -- # echo 2 00:04:45.882 12:13:07 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.882 12:13:07 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.882 12:13:07 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.882 12:13:07 env -- scripts/common.sh@368 -- # return 0 00:04:45.882 12:13:07 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.882 12:13:07 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:45.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.882 --rc genhtml_branch_coverage=1 00:04:45.882 --rc genhtml_function_coverage=1 00:04:45.882 --rc genhtml_legend=1 00:04:45.882 --rc geninfo_all_blocks=1 00:04:45.882 --rc geninfo_unexecuted_blocks=1 00:04:45.882 00:04:45.882 ' 00:04:45.882 12:13:07 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:45.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.882 --rc genhtml_branch_coverage=1 00:04:45.882 --rc genhtml_function_coverage=1 00:04:45.882 --rc genhtml_legend=1 00:04:45.882 --rc geninfo_all_blocks=1 00:04:45.882 --rc geninfo_unexecuted_blocks=1 00:04:45.882 00:04:45.882 ' 00:04:45.882 12:13:07 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:45.882 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.883 --rc genhtml_branch_coverage=1 00:04:45.883 --rc genhtml_function_coverage=1 00:04:45.883 --rc genhtml_legend=1 00:04:45.883 --rc geninfo_all_blocks=1 00:04:45.883 --rc geninfo_unexecuted_blocks=1 00:04:45.883 00:04:45.883 ' 00:04:45.883 12:13:07 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:45.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.883 --rc genhtml_branch_coverage=1 00:04:45.883 --rc genhtml_function_coverage=1 00:04:45.883 --rc genhtml_legend=1 00:04:45.883 --rc geninfo_all_blocks=1 00:04:45.883 --rc geninfo_unexecuted_blocks=1 00:04:45.883 00:04:45.883 ' 00:04:45.883 12:13:07 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/memory/memory_ut 00:04:45.883 12:13:07 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.883 12:13:07 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.883 12:13:07 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.883 ************************************ 00:04:45.883 START TEST env_memory 00:04:45.883 ************************************ 00:04:45.883 12:13:07 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/memory/memory_ut 00:04:45.883 00:04:45.883 00:04:45.883 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.883 http://cunit.sourceforge.net/ 00:04:45.883 00:04:45.883 00:04:45.883 Suite: memory 00:04:45.883 Test: alloc and free memory map ...[2024-12-10 12:13:07.862296] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:45.883 passed 00:04:45.883 Test: mem map translation ...[2024-12-10 12:13:07.881863] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation 
parameters, vaddr=2097152 len=1234 00:04:45.883 [2024-12-10 12:13:07.881879] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:45.883 [2024-12-10 12:13:07.881916] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:45.883 [2024-12-10 12:13:07.881922] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:45.883 passed 00:04:45.883 Test: mem map registration ...[2024-12-10 12:13:07.921811] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:45.883 [2024-12-10 12:13:07.921825] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:45.883 passed 00:04:45.883 Test: mem map adjacent registrations ...passed 00:04:45.883 00:04:45.883 Run Summary: Type Total Ran Passed Failed Inactive 00:04:45.883 suites 1 1 n/a 0 0 00:04:45.883 tests 4 4 4 0 0 00:04:45.883 asserts 152 152 152 0 n/a 00:04:45.883 00:04:45.883 Elapsed time = 0.143 seconds 00:04:45.883 00:04:45.883 real 0m0.156s 00:04:45.883 user 0m0.147s 00:04:45.883 sys 0m0.008s 00:04:45.883 12:13:07 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.883 12:13:07 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:45.883 ************************************ 00:04:45.883 END TEST env_memory 00:04:45.883 ************************************ 00:04:45.883 12:13:08 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/vtophys/vtophys 
00:04:45.883 12:13:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.883 12:13:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.883 12:13:08 env -- common/autotest_common.sh@10 -- # set +x 00:04:45.883 ************************************ 00:04:45.883 START TEST env_vtophys 00:04:45.883 ************************************ 00:04:45.883 12:13:08 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/vtophys/vtophys 00:04:46.143 EAL: lib.eal log level changed from notice to debug 00:04:46.143 EAL: Detected lcore 0 as core 0 on socket 0 00:04:46.143 EAL: Detected lcore 1 as core 1 on socket 0 00:04:46.143 EAL: Detected lcore 2 as core 2 on socket 0 00:04:46.143 EAL: Detected lcore 3 as core 3 on socket 0 00:04:46.143 EAL: Detected lcore 4 as core 4 on socket 0 00:04:46.143 EAL: Detected lcore 5 as core 5 on socket 0 00:04:46.143 EAL: Detected lcore 6 as core 6 on socket 0 00:04:46.143 EAL: Detected lcore 7 as core 8 on socket 0 00:04:46.143 EAL: Detected lcore 8 as core 9 on socket 0 00:04:46.143 EAL: Detected lcore 9 as core 10 on socket 0 00:04:46.143 EAL: Detected lcore 10 as core 11 on socket 0 00:04:46.143 EAL: Detected lcore 11 as core 12 on socket 0 00:04:46.143 EAL: Detected lcore 12 as core 13 on socket 0 00:04:46.143 EAL: Detected lcore 13 as core 16 on socket 0 00:04:46.143 EAL: Detected lcore 14 as core 17 on socket 0 00:04:46.143 EAL: Detected lcore 15 as core 18 on socket 0 00:04:46.143 EAL: Detected lcore 16 as core 19 on socket 0 00:04:46.143 EAL: Detected lcore 17 as core 20 on socket 0 00:04:46.143 EAL: Detected lcore 18 as core 21 on socket 0 00:04:46.143 EAL: Detected lcore 19 as core 25 on socket 0 00:04:46.143 EAL: Detected lcore 20 as core 26 on socket 0 00:04:46.143 EAL: Detected lcore 21 as core 27 on socket 0 00:04:46.143 EAL: Detected lcore 22 as core 28 on socket 0 00:04:46.143 EAL: Detected lcore 23 as core 29 on socket 0 00:04:46.143 
EAL: Detected lcore 24 as core 0 on socket 1 00:04:46.143 EAL: Detected lcore 25 as core 1 on socket 1 00:04:46.143 EAL: Detected lcore 26 as core 2 on socket 1 00:04:46.143 EAL: Detected lcore 27 as core 3 on socket 1 00:04:46.143 EAL: Detected lcore 28 as core 4 on socket 1 00:04:46.143 EAL: Detected lcore 29 as core 5 on socket 1 00:04:46.143 EAL: Detected lcore 30 as core 6 on socket 1 00:04:46.143 EAL: Detected lcore 31 as core 9 on socket 1 00:04:46.143 EAL: Detected lcore 32 as core 10 on socket 1 00:04:46.143 EAL: Detected lcore 33 as core 11 on socket 1 00:04:46.143 EAL: Detected lcore 34 as core 12 on socket 1 00:04:46.143 EAL: Detected lcore 35 as core 13 on socket 1 00:04:46.143 EAL: Detected lcore 36 as core 16 on socket 1 00:04:46.143 EAL: Detected lcore 37 as core 17 on socket 1 00:04:46.143 EAL: Detected lcore 38 as core 18 on socket 1 00:04:46.143 EAL: Detected lcore 39 as core 19 on socket 1 00:04:46.143 EAL: Detected lcore 40 as core 20 on socket 1 00:04:46.143 EAL: Detected lcore 41 as core 21 on socket 1 00:04:46.143 EAL: Detected lcore 42 as core 24 on socket 1 00:04:46.143 EAL: Detected lcore 43 as core 25 on socket 1 00:04:46.143 EAL: Detected lcore 44 as core 26 on socket 1 00:04:46.143 EAL: Detected lcore 45 as core 27 on socket 1 00:04:46.143 EAL: Detected lcore 46 as core 28 on socket 1 00:04:46.143 EAL: Detected lcore 47 as core 29 on socket 1 00:04:46.143 EAL: Detected lcore 48 as core 0 on socket 0 00:04:46.143 EAL: Detected lcore 49 as core 1 on socket 0 00:04:46.143 EAL: Detected lcore 50 as core 2 on socket 0 00:04:46.143 EAL: Detected lcore 51 as core 3 on socket 0 00:04:46.143 EAL: Detected lcore 52 as core 4 on socket 0 00:04:46.143 EAL: Detected lcore 53 as core 5 on socket 0 00:04:46.143 EAL: Detected lcore 54 as core 6 on socket 0 00:04:46.143 EAL: Detected lcore 55 as core 8 on socket 0 00:04:46.143 EAL: Detected lcore 56 as core 9 on socket 0 00:04:46.143 EAL: Detected lcore 57 as core 10 on socket 0 00:04:46.143 EAL: 
Detected lcore 58 as core 11 on socket 0 00:04:46.143 EAL: Detected lcore 59 as core 12 on socket 0 00:04:46.143 EAL: Detected lcore 60 as core 13 on socket 0 00:04:46.143 EAL: Detected lcore 61 as core 16 on socket 0 00:04:46.143 EAL: Detected lcore 62 as core 17 on socket 0 00:04:46.143 EAL: Detected lcore 63 as core 18 on socket 0 00:04:46.143 EAL: Detected lcore 64 as core 19 on socket 0 00:04:46.143 EAL: Detected lcore 65 as core 20 on socket 0 00:04:46.143 EAL: Detected lcore 66 as core 21 on socket 0 00:04:46.143 EAL: Detected lcore 67 as core 25 on socket 0 00:04:46.143 EAL: Detected lcore 68 as core 26 on socket 0 00:04:46.143 EAL: Detected lcore 69 as core 27 on socket 0 00:04:46.143 EAL: Detected lcore 70 as core 28 on socket 0 00:04:46.143 EAL: Detected lcore 71 as core 29 on socket 0 00:04:46.143 EAL: Detected lcore 72 as core 0 on socket 1 00:04:46.143 EAL: Detected lcore 73 as core 1 on socket 1 00:04:46.143 EAL: Detected lcore 74 as core 2 on socket 1 00:04:46.143 EAL: Detected lcore 75 as core 3 on socket 1 00:04:46.143 EAL: Detected lcore 76 as core 4 on socket 1 00:04:46.143 EAL: Detected lcore 77 as core 5 on socket 1 00:04:46.143 EAL: Detected lcore 78 as core 6 on socket 1 00:04:46.143 EAL: Detected lcore 79 as core 9 on socket 1 00:04:46.143 EAL: Detected lcore 80 as core 10 on socket 1 00:04:46.143 EAL: Detected lcore 81 as core 11 on socket 1 00:04:46.143 EAL: Detected lcore 82 as core 12 on socket 1 00:04:46.143 EAL: Detected lcore 83 as core 13 on socket 1 00:04:46.143 EAL: Detected lcore 84 as core 16 on socket 1 00:04:46.143 EAL: Detected lcore 85 as core 17 on socket 1 00:04:46.143 EAL: Detected lcore 86 as core 18 on socket 1 00:04:46.143 EAL: Detected lcore 87 as core 19 on socket 1 00:04:46.143 EAL: Detected lcore 88 as core 20 on socket 1 00:04:46.143 EAL: Detected lcore 89 as core 21 on socket 1 00:04:46.143 EAL: Detected lcore 90 as core 24 on socket 1 00:04:46.143 EAL: Detected lcore 91 as core 25 on socket 1 00:04:46.143 EAL: 
Detected lcore 92 as core 26 on socket 1 00:04:46.143 EAL: Detected lcore 93 as core 27 on socket 1 00:04:46.143 EAL: Detected lcore 94 as core 28 on socket 1 00:04:46.143 EAL: Detected lcore 95 as core 29 on socket 1 00:04:46.143 EAL: Maximum logical cores by configuration: 128 00:04:46.143 EAL: Detected CPU lcores: 96 00:04:46.143 EAL: Detected NUMA nodes: 2 00:04:46.143 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:46.143 EAL: Detected shared linkage of DPDK 00:04:46.143 EAL: No shared files mode enabled, IPC will be disabled 00:04:46.143 EAL: Bus pci wants IOVA as 'DC' 00:04:46.143 EAL: Buses did not request a specific IOVA mode. 00:04:46.143 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:46.143 EAL: Selected IOVA mode 'VA' 00:04:46.143 EAL: Probing VFIO support... 00:04:46.143 EAL: IOMMU type 1 (Type 1) is supported 00:04:46.143 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:46.143 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:46.143 EAL: VFIO support initialized 00:04:46.143 EAL: Ask a virtual area of 0x2e000 bytes 00:04:46.143 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:46.143 EAL: Setting up physically contiguous memory... 
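EAL's per-lcore lines above come from the kernel's CPU topology; the same core/socket mapping can be read straight from sysfs. A sketch for Linux, using the standard `topology` attributes (this is an approximation of what EAL reports, not its actual code path):

```shell
# Rebuild EAL's "Detected lcore N as core C on socket S" lines from sysfs topology.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
  lcore=${cpu##*cpu}                                   # trailing digits of cpuN
  core=$(cat "$cpu/topology/core_id")
  socket=$(cat "$cpu/topology/physical_package_id")
  echo "Detected lcore $lcore as core $core on socket $socket"
done
```

On this node the mapping spans 96 lcores across 2 sockets, with hyperthread siblings (e.g. lcores 0 and 48) sharing a core id.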
00:04:46.143 EAL: Setting maximum number of open files to 524288 00:04:46.143 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:46.143 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:46.143 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:46.143 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.143 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:46.143 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.143 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.143 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:46.143 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:46.143 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.143 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:46.143 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.143 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.143 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:46.143 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:46.143 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.143 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:46.143 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.143 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.143 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:46.143 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:46.143 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.143 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:46.143 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.143 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.143 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:46.143 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:46.143 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:04:46.143 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.143 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:46.143 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:46.143 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.143 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:46.143 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:46.143 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.143 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:46.143 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:46.143 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.143 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:46.143 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:46.143 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.143 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:46.143 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:46.143 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.143 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:46.143 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:46.143 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.143 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:46.144 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:46.144 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.144 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:46.144 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:46.144 EAL: Hugepages will be freed exactly as allocated. 
00:04:46.144 EAL: No shared files mode enabled, IPC is disabled 00:04:46.144 EAL: No shared files mode enabled, IPC is disabled 00:04:46.144 EAL: TSC frequency is ~2300000 KHz 00:04:46.144 EAL: Main lcore 0 is ready (tid=7efd23ecda00;cpuset=[0]) 00:04:46.144 EAL: Trying to obtain current memory policy. 00:04:46.144 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.144 EAL: Restoring previous memory policy: 0 00:04:46.144 EAL: request: mp_malloc_sync 00:04:46.144 EAL: No shared files mode enabled, IPC is disabled 00:04:46.144 EAL: Heap on socket 0 was expanded by 2MB 00:04:46.144 EAL: No shared files mode enabled, IPC is disabled 00:04:46.144 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:46.144 EAL: Mem event callback 'spdk:(nil)' registered 00:04:46.144 00:04:46.144 00:04:46.144 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.144 http://cunit.sourceforge.net/ 00:04:46.144 00:04:46.144 00:04:46.144 Suite: components_suite 00:04:46.144 Test: vtophys_malloc_test ...passed 00:04:46.144 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:46.144 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.144 EAL: Restoring previous memory policy: 4 00:04:46.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.144 EAL: request: mp_malloc_sync 00:04:46.144 EAL: No shared files mode enabled, IPC is disabled 00:04:46.144 EAL: Heap on socket 0 was expanded by 4MB 00:04:46.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.144 EAL: request: mp_malloc_sync 00:04:46.144 EAL: No shared files mode enabled, IPC is disabled 00:04:46.144 EAL: Heap on socket 0 was shrunk by 4MB 00:04:46.144 EAL: Trying to obtain current memory policy. 
00:04:46.144 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.144 EAL: Restoring previous memory policy: 4 00:04:46.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.144 EAL: request: mp_malloc_sync 00:04:46.144 EAL: No shared files mode enabled, IPC is disabled 00:04:46.144 EAL: Heap on socket 0 was expanded by 6MB 00:04:46.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.144 EAL: request: mp_malloc_sync 00:04:46.144 EAL: No shared files mode enabled, IPC is disabled 00:04:46.144 EAL: Heap on socket 0 was shrunk by 6MB 00:04:46.144 EAL: Trying to obtain current memory policy. 00:04:46.144 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.144 EAL: Restoring previous memory policy: 4 00:04:46.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.144 EAL: request: mp_malloc_sync 00:04:46.144 EAL: No shared files mode enabled, IPC is disabled 00:04:46.144 EAL: Heap on socket 0 was expanded by 10MB 00:04:46.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.144 EAL: request: mp_malloc_sync 00:04:46.144 EAL: No shared files mode enabled, IPC is disabled 00:04:46.144 EAL: Heap on socket 0 was shrunk by 10MB 00:04:46.144 EAL: Trying to obtain current memory policy. 00:04:46.144 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.144 EAL: Restoring previous memory policy: 4 00:04:46.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.144 EAL: request: mp_malloc_sync 00:04:46.144 EAL: No shared files mode enabled, IPC is disabled 00:04:46.144 EAL: Heap on socket 0 was expanded by 18MB 00:04:46.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.144 EAL: request: mp_malloc_sync 00:04:46.144 EAL: No shared files mode enabled, IPC is disabled 00:04:46.144 EAL: Heap on socket 0 was shrunk by 18MB 00:04:46.144 EAL: Trying to obtain current memory policy. 
00:04:46.144 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.144 EAL: Restoring previous memory policy: 4 00:04:46.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.144 EAL: request: mp_malloc_sync 00:04:46.144 EAL: No shared files mode enabled, IPC is disabled 00:04:46.144 EAL: Heap on socket 0 was expanded by 34MB 00:04:46.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.144 EAL: request: mp_malloc_sync 00:04:46.144 EAL: No shared files mode enabled, IPC is disabled 00:04:46.144 EAL: Heap on socket 0 was shrunk by 34MB 00:04:46.144 EAL: Trying to obtain current memory policy. 00:04:46.144 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.144 EAL: Restoring previous memory policy: 4 00:04:46.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.144 EAL: request: mp_malloc_sync 00:04:46.144 EAL: No shared files mode enabled, IPC is disabled 00:04:46.144 EAL: Heap on socket 0 was expanded by 66MB 00:04:46.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.144 EAL: request: mp_malloc_sync 00:04:46.144 EAL: No shared files mode enabled, IPC is disabled 00:04:46.144 EAL: Heap on socket 0 was shrunk by 66MB 00:04:46.144 EAL: Trying to obtain current memory policy. 00:04:46.144 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.144 EAL: Restoring previous memory policy: 4 00:04:46.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.144 EAL: request: mp_malloc_sync 00:04:46.144 EAL: No shared files mode enabled, IPC is disabled 00:04:46.144 EAL: Heap on socket 0 was expanded by 130MB 00:04:46.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.144 EAL: request: mp_malloc_sync 00:04:46.144 EAL: No shared files mode enabled, IPC is disabled 00:04:46.144 EAL: Heap on socket 0 was shrunk by 130MB 00:04:46.144 EAL: Trying to obtain current memory policy. 
00:04:46.144 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.144 EAL: Restoring previous memory policy: 4 00:04:46.144 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.144 EAL: request: mp_malloc_sync 00:04:46.144 EAL: No shared files mode enabled, IPC is disabled 00:04:46.144 EAL: Heap on socket 0 was expanded by 258MB 00:04:46.404 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.404 EAL: request: mp_malloc_sync 00:04:46.404 EAL: No shared files mode enabled, IPC is disabled 00:04:46.404 EAL: Heap on socket 0 was shrunk by 258MB 00:04:46.404 EAL: Trying to obtain current memory policy. 00:04:46.404 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.404 EAL: Restoring previous memory policy: 4 00:04:46.404 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.404 EAL: request: mp_malloc_sync 00:04:46.404 EAL: No shared files mode enabled, IPC is disabled 00:04:46.404 EAL: Heap on socket 0 was expanded by 514MB 00:04:46.404 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.663 EAL: request: mp_malloc_sync 00:04:46.663 EAL: No shared files mode enabled, IPC is disabled 00:04:46.663 EAL: Heap on socket 0 was shrunk by 514MB 00:04:46.663 EAL: Trying to obtain current memory policy. 
00:04:46.663 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.663 EAL: Restoring previous memory policy: 4 00:04:46.663 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.663 EAL: request: mp_malloc_sync 00:04:46.663 EAL: No shared files mode enabled, IPC is disabled 00:04:46.663 EAL: Heap on socket 0 was expanded by 1026MB 00:04:46.922 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.182 EAL: request: mp_malloc_sync 00:04:47.182 EAL: No shared files mode enabled, IPC is disabled 00:04:47.182 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:47.182 passed 00:04:47.182 00:04:47.182 Run Summary: Type Total Ran Passed Failed Inactive 00:04:47.182 suites 1 1 n/a 0 0 00:04:47.182 tests 2 2 2 0 0 00:04:47.182 asserts 497 497 497 0 n/a 00:04:47.182 00:04:47.182 Elapsed time = 0.974 seconds 00:04:47.182 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.182 EAL: request: mp_malloc_sync 00:04:47.182 EAL: No shared files mode enabled, IPC is disabled 00:04:47.182 EAL: Heap on socket 0 was shrunk by 2MB 00:04:47.182 EAL: No shared files mode enabled, IPC is disabled 00:04:47.182 EAL: No shared files mode enabled, IPC is disabled 00:04:47.182 EAL: No shared files mode enabled, IPC is disabled 00:04:47.182 00:04:47.182 real 0m1.101s 00:04:47.182 user 0m0.651s 00:04:47.182 sys 0m0.427s 00:04:47.182 12:13:09 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.182 12:13:09 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:47.182 ************************************ 00:04:47.182 END TEST env_vtophys 00:04:47.182 ************************************ 00:04:47.182 12:13:09 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/pci/pci_ut 00:04:47.182 12:13:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.182 12:13:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.182 12:13:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.182 
************************************ 00:04:47.182 START TEST env_pci 00:04:47.182 ************************************ 00:04:47.182 12:13:09 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/pci/pci_ut 00:04:47.182 00:04:47.182 00:04:47.182 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.182 http://cunit.sourceforge.net/ 00:04:47.182 00:04:47.182 00:04:47.182 Suite: pci 00:04:47.182 Test: pci_hook ...[2024-12-10 12:13:09.232343] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1432865 has claimed it 00:04:47.182 EAL: Cannot find device (10000:00:01.0) 00:04:47.182 EAL: Failed to attach device on primary process 00:04:47.182 passed 00:04:47.182 00:04:47.182 Run Summary: Type Total Ran Passed Failed Inactive 00:04:47.182 suites 1 1 n/a 0 0 00:04:47.182 tests 1 1 1 0 0 00:04:47.182 asserts 25 25 25 0 n/a 00:04:47.182 00:04:47.182 Elapsed time = 0.027 seconds 00:04:47.182 00:04:47.182 real 0m0.046s 00:04:47.182 user 0m0.015s 00:04:47.182 sys 0m0.030s 00:04:47.182 12:13:09 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.182 12:13:09 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:47.182 ************************************ 00:04:47.182 END TEST env_pci 00:04:47.182 ************************************ 00:04:47.182 12:13:09 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:47.182 12:13:09 env -- env/env.sh@15 -- # uname 00:04:47.182 12:13:09 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:47.182 12:13:09 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:47.182 12:13:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:47.182 12:13:09 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:47.182 12:13:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.182 12:13:09 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.182 ************************************ 00:04:47.182 START TEST env_dpdk_post_init 00:04:47.182 ************************************ 00:04:47.182 12:13:09 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:47.442 EAL: Detected CPU lcores: 96 00:04:47.442 EAL: Detected NUMA nodes: 2 00:04:47.442 EAL: Detected shared linkage of DPDK 00:04:47.442 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:47.442 EAL: Selected IOVA mode 'VA' 00:04:47.442 EAL: VFIO support initialized 00:04:47.442 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:47.442 EAL: Using IOMMU type 1 (Type 1) 00:04:47.442 EAL: Ignore mapping IO port bar(1) 00:04:47.442 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:47.442 EAL: Ignore mapping IO port bar(1) 00:04:47.442 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:47.442 EAL: Ignore mapping IO port bar(1) 00:04:47.442 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:47.442 EAL: Ignore mapping IO port bar(1) 00:04:47.442 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:47.442 EAL: Ignore mapping IO port bar(1) 00:04:47.442 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:47.442 EAL: Ignore mapping IO port bar(1) 00:04:47.442 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:47.442 EAL: Ignore mapping IO port bar(1) 00:04:47.442 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:47.442 EAL: Ignore mapping IO port bar(1) 00:04:47.442 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:48.380 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:48.380 EAL: Ignore mapping IO port bar(1) 00:04:48.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:48.380 EAL: Ignore mapping IO port bar(1) 00:04:48.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:48.380 EAL: Ignore mapping IO port bar(1) 00:04:48.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:48.380 EAL: Ignore mapping IO port bar(1) 00:04:48.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:48.380 EAL: Ignore mapping IO port bar(1) 00:04:48.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:48.380 EAL: Ignore mapping IO port bar(1) 00:04:48.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:48.380 EAL: Ignore mapping IO port bar(1) 00:04:48.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:48.380 EAL: Ignore mapping IO port bar(1) 00:04:48.380 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:51.668 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:51.668 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:51.668 Starting DPDK initialization... 00:04:51.668 Starting SPDK post initialization... 00:04:51.668 SPDK NVMe probe 00:04:51.668 Attaching to 0000:5e:00.0 00:04:51.668 Attached to 0000:5e:00.0 00:04:51.668 Cleaning up... 
00:04:51.668 00:04:51.668 real 0m4.374s 00:04:51.668 user 0m2.977s 00:04:51.668 sys 0m0.469s 00:04:51.668 12:13:13 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.668 12:13:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.668 ************************************ 00:04:51.668 END TEST env_dpdk_post_init 00:04:51.668 ************************************ 00:04:51.668 12:13:13 env -- env/env.sh@26 -- # uname 00:04:51.668 12:13:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:51.668 12:13:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/mem_callbacks/mem_callbacks 00:04:51.668 12:13:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.668 12:13:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.668 12:13:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.668 ************************************ 00:04:51.668 START TEST env_mem_callbacks 00:04:51.668 ************************************ 00:04:51.668 12:13:13 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/mem_callbacks/mem_callbacks 00:04:51.668 EAL: Detected CPU lcores: 96 00:04:51.668 EAL: Detected NUMA nodes: 2 00:04:51.668 EAL: Detected shared linkage of DPDK 00:04:51.668 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:51.927 EAL: Selected IOVA mode 'VA' 00:04:51.927 EAL: VFIO support initialized 00:04:51.927 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:51.927 00:04:51.927 00:04:51.927 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.927 http://cunit.sourceforge.net/ 00:04:51.927 00:04:51.927 00:04:51.927 Suite: memory 00:04:51.927 Test: test ... 
00:04:51.927 register 0x200000200000 2097152 00:04:51.927 malloc 3145728 00:04:51.927 register 0x200000400000 4194304 00:04:51.927 buf 0x200000500000 len 3145728 PASSED 00:04:51.927 malloc 64 00:04:51.927 buf 0x2000004fff40 len 64 PASSED 00:04:51.927 malloc 4194304 00:04:51.927 register 0x200000800000 6291456 00:04:51.927 buf 0x200000a00000 len 4194304 PASSED 00:04:51.927 free 0x200000500000 3145728 00:04:51.927 free 0x2000004fff40 64 00:04:51.927 unregister 0x200000400000 4194304 PASSED 00:04:51.927 free 0x200000a00000 4194304 00:04:51.927 unregister 0x200000800000 6291456 PASSED 00:04:51.927 malloc 8388608 00:04:51.927 register 0x200000400000 10485760 00:04:51.927 buf 0x200000600000 len 8388608 PASSED 00:04:51.927 free 0x200000600000 8388608 00:04:51.927 unregister 0x200000400000 10485760 PASSED 00:04:51.927 passed 00:04:51.927 00:04:51.927 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.927 suites 1 1 n/a 0 0 00:04:51.927 tests 1 1 1 0 0 00:04:51.927 asserts 15 15 15 0 n/a 00:04:51.927 00:04:51.927 Elapsed time = 0.008 seconds 00:04:51.927 00:04:51.927 real 0m0.061s 00:04:51.928 user 0m0.019s 00:04:51.928 sys 0m0.041s 00:04:51.928 12:13:13 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.928 12:13:13 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:51.928 ************************************ 00:04:51.928 END TEST env_mem_callbacks 00:04:51.928 ************************************ 00:04:51.928 00:04:51.928 real 0m6.283s 00:04:51.928 user 0m4.064s 00:04:51.928 sys 0m1.302s 00:04:51.928 12:13:13 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.928 12:13:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.928 ************************************ 00:04:51.928 END TEST env 00:04:51.928 ************************************ 00:04:51.928 12:13:13 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/rpc.sh 00:04:51.928 12:13:13 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.928 12:13:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.928 12:13:13 -- common/autotest_common.sh@10 -- # set +x 00:04:51.928 ************************************ 00:04:51.928 START TEST rpc 00:04:51.928 ************************************ 00:04:51.928 12:13:13 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/rpc.sh 00:04:51.928 * Looking for test storage... 00:04:51.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc 00:04:51.928 12:13:14 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:51.928 12:13:14 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:51.928 12:13:14 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:52.187 12:13:14 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:52.187 12:13:14 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.187 12:13:14 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.187 12:13:14 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.187 12:13:14 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.187 12:13:14 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.187 12:13:14 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.187 12:13:14 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.187 12:13:14 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.187 12:13:14 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.187 12:13:14 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.187 12:13:14 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.187 12:13:14 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:52.187 12:13:14 rpc -- scripts/common.sh@345 -- # : 1 00:04:52.187 12:13:14 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.187 12:13:14 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:52.187 12:13:14 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:52.187 12:13:14 rpc -- scripts/common.sh@353 -- # local d=1 00:04:52.187 12:13:14 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.187 12:13:14 rpc -- scripts/common.sh@355 -- # echo 1 00:04:52.187 12:13:14 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.187 12:13:14 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:52.187 12:13:14 rpc -- scripts/common.sh@353 -- # local d=2 00:04:52.187 12:13:14 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.187 12:13:14 rpc -- scripts/common.sh@355 -- # echo 2 00:04:52.187 12:13:14 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.187 12:13:14 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.187 12:13:14 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.187 12:13:14 rpc -- scripts/common.sh@368 -- # return 0 00:04:52.187 12:13:14 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.187 12:13:14 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:52.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.187 --rc genhtml_branch_coverage=1 00:04:52.187 --rc genhtml_function_coverage=1 00:04:52.187 --rc genhtml_legend=1 00:04:52.187 --rc geninfo_all_blocks=1 00:04:52.187 --rc geninfo_unexecuted_blocks=1 00:04:52.187 00:04:52.187 ' 00:04:52.187 12:13:14 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:52.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.187 --rc genhtml_branch_coverage=1 00:04:52.187 --rc genhtml_function_coverage=1 00:04:52.187 --rc genhtml_legend=1 00:04:52.187 --rc geninfo_all_blocks=1 00:04:52.187 --rc geninfo_unexecuted_blocks=1 00:04:52.187 00:04:52.187 ' 00:04:52.187 12:13:14 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:52.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:52.187 --rc genhtml_branch_coverage=1 00:04:52.187 --rc genhtml_function_coverage=1 00:04:52.187 --rc genhtml_legend=1 00:04:52.187 --rc geninfo_all_blocks=1 00:04:52.187 --rc geninfo_unexecuted_blocks=1 00:04:52.187 00:04:52.187 ' 00:04:52.187 12:13:14 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:52.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.187 --rc genhtml_branch_coverage=1 00:04:52.187 --rc genhtml_function_coverage=1 00:04:52.187 --rc genhtml_legend=1 00:04:52.187 --rc geninfo_all_blocks=1 00:04:52.187 --rc geninfo_unexecuted_blocks=1 00:04:52.187 00:04:52.187 ' 00:04:52.187 12:13:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1433755 00:04:52.187 12:13:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.187 12:13:14 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -e bdev 00:04:52.187 12:13:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1433755 00:04:52.187 12:13:14 rpc -- common/autotest_common.sh@835 -- # '[' -z 1433755 ']' 00:04:52.187 12:13:14 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.187 12:13:14 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.187 12:13:14 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.187 12:13:14 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.187 12:13:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.187 [2024-12-10 12:13:14.193438] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:04:52.187 [2024-12-10 12:13:14.193483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1433755 ] 00:04:52.187 [2024-12-10 12:13:14.268717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.187 [2024-12-10 12:13:14.309331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:52.187 [2024-12-10 12:13:14.309366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1433755' to capture a snapshot of events at runtime. 00:04:52.187 [2024-12-10 12:13:14.309373] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:52.187 [2024-12-10 12:13:14.309379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:52.187 [2024-12-10 12:13:14.309384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1433755 for offline analysis/debug. 
00:04:52.187 [2024-12-10 12:13:14.309950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.447 12:13:14 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.447 12:13:14 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:52.447 12:13:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc 00:04:52.447 12:13:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc 00:04:52.447 12:13:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:52.447 12:13:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:52.447 12:13:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.447 12:13:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.447 12:13:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.447 ************************************ 00:04:52.447 START TEST rpc_integrity 00:04:52.447 ************************************ 00:04:52.447 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:52.447 12:13:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:52.447 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.447 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.447 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.447 12:13:14 
rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:52.447 12:13:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:52.447 12:13:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:52.447 12:13:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:52.447 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.447 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.706 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.706 12:13:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:52.706 12:13:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:52.706 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.706 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.706 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.706 12:13:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:52.706 { 00:04:52.706 "name": "Malloc0", 00:04:52.706 "aliases": [ 00:04:52.706 "7aec6a21-7141-4eeb-9c10-7551cdfabddd" 00:04:52.706 ], 00:04:52.706 "product_name": "Malloc disk", 00:04:52.706 "block_size": 512, 00:04:52.706 "num_blocks": 16384, 00:04:52.706 "uuid": "7aec6a21-7141-4eeb-9c10-7551cdfabddd", 00:04:52.706 "assigned_rate_limits": { 00:04:52.706 "rw_ios_per_sec": 0, 00:04:52.706 "rw_mbytes_per_sec": 0, 00:04:52.706 "r_mbytes_per_sec": 0, 00:04:52.706 "w_mbytes_per_sec": 0 00:04:52.706 }, 00:04:52.706 "claimed": false, 00:04:52.706 "zoned": false, 00:04:52.706 "supported_io_types": { 00:04:52.706 "read": true, 00:04:52.706 "write": true, 00:04:52.706 "unmap": true, 00:04:52.706 "flush": true, 00:04:52.706 "reset": true, 00:04:52.706 "nvme_admin": false, 00:04:52.706 "nvme_io": false, 00:04:52.706 "nvme_io_md": false, 00:04:52.706 "write_zeroes": true, 00:04:52.706 "zcopy": true, 00:04:52.706 
"get_zone_info": false, 00:04:52.706 "zone_management": false, 00:04:52.706 "zone_append": false, 00:04:52.706 "compare": false, 00:04:52.706 "compare_and_write": false, 00:04:52.706 "abort": true, 00:04:52.706 "seek_hole": false, 00:04:52.706 "seek_data": false, 00:04:52.706 "copy": true, 00:04:52.706 "nvme_iov_md": false 00:04:52.706 }, 00:04:52.706 "memory_domains": [ 00:04:52.706 { 00:04:52.706 "dma_device_id": "system", 00:04:52.706 "dma_device_type": 1 00:04:52.706 }, 00:04:52.706 { 00:04:52.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.706 "dma_device_type": 2 00:04:52.706 } 00:04:52.706 ], 00:04:52.706 "driver_specific": {} 00:04:52.706 } 00:04:52.706 ]' 00:04:52.706 12:13:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:52.706 12:13:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:52.706 12:13:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:52.706 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.706 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.706 [2024-12-10 12:13:14.686021] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:52.706 [2024-12-10 12:13:14.686050] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:52.706 [2024-12-10 12:13:14.686063] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c4f140 00:04:52.706 [2024-12-10 12:13:14.686069] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:52.706 [2024-12-10 12:13:14.687169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:52.706 [2024-12-10 12:13:14.687190] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:52.706 Passthru0 00:04:52.706 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.706 12:13:14 rpc.rpc_integrity 
-- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:52.707 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.707 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.707 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.707 12:13:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:52.707 { 00:04:52.707 "name": "Malloc0", 00:04:52.707 "aliases": [ 00:04:52.707 "7aec6a21-7141-4eeb-9c10-7551cdfabddd" 00:04:52.707 ], 00:04:52.707 "product_name": "Malloc disk", 00:04:52.707 "block_size": 512, 00:04:52.707 "num_blocks": 16384, 00:04:52.707 "uuid": "7aec6a21-7141-4eeb-9c10-7551cdfabddd", 00:04:52.707 "assigned_rate_limits": { 00:04:52.707 "rw_ios_per_sec": 0, 00:04:52.707 "rw_mbytes_per_sec": 0, 00:04:52.707 "r_mbytes_per_sec": 0, 00:04:52.707 "w_mbytes_per_sec": 0 00:04:52.707 }, 00:04:52.707 "claimed": true, 00:04:52.707 "claim_type": "exclusive_write", 00:04:52.707 "zoned": false, 00:04:52.707 "supported_io_types": { 00:04:52.707 "read": true, 00:04:52.707 "write": true, 00:04:52.707 "unmap": true, 00:04:52.707 "flush": true, 00:04:52.707 "reset": true, 00:04:52.707 "nvme_admin": false, 00:04:52.707 "nvme_io": false, 00:04:52.707 "nvme_io_md": false, 00:04:52.707 "write_zeroes": true, 00:04:52.707 "zcopy": true, 00:04:52.707 "get_zone_info": false, 00:04:52.707 "zone_management": false, 00:04:52.707 "zone_append": false, 00:04:52.707 "compare": false, 00:04:52.707 "compare_and_write": false, 00:04:52.707 "abort": true, 00:04:52.707 "seek_hole": false, 00:04:52.707 "seek_data": false, 00:04:52.707 "copy": true, 00:04:52.707 "nvme_iov_md": false 00:04:52.707 }, 00:04:52.707 "memory_domains": [ 00:04:52.707 { 00:04:52.707 "dma_device_id": "system", 00:04:52.707 "dma_device_type": 1 00:04:52.707 }, 00:04:52.707 { 00:04:52.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.707 "dma_device_type": 2 00:04:52.707 } 00:04:52.707 ], 00:04:52.707 "driver_specific": {} 
00:04:52.707 }, 00:04:52.707 { 00:04:52.707 "name": "Passthru0", 00:04:52.707 "aliases": [ 00:04:52.707 "9f8c1176-3005-5d1d-9f98-83e9fda485b3" 00:04:52.707 ], 00:04:52.707 "product_name": "passthru", 00:04:52.707 "block_size": 512, 00:04:52.707 "num_blocks": 16384, 00:04:52.707 "uuid": "9f8c1176-3005-5d1d-9f98-83e9fda485b3", 00:04:52.707 "assigned_rate_limits": { 00:04:52.707 "rw_ios_per_sec": 0, 00:04:52.707 "rw_mbytes_per_sec": 0, 00:04:52.707 "r_mbytes_per_sec": 0, 00:04:52.707 "w_mbytes_per_sec": 0 00:04:52.707 }, 00:04:52.707 "claimed": false, 00:04:52.707 "zoned": false, 00:04:52.707 "supported_io_types": { 00:04:52.707 "read": true, 00:04:52.707 "write": true, 00:04:52.707 "unmap": true, 00:04:52.707 "flush": true, 00:04:52.707 "reset": true, 00:04:52.707 "nvme_admin": false, 00:04:52.707 "nvme_io": false, 00:04:52.707 "nvme_io_md": false, 00:04:52.707 "write_zeroes": true, 00:04:52.707 "zcopy": true, 00:04:52.707 "get_zone_info": false, 00:04:52.707 "zone_management": false, 00:04:52.707 "zone_append": false, 00:04:52.707 "compare": false, 00:04:52.707 "compare_and_write": false, 00:04:52.707 "abort": true, 00:04:52.707 "seek_hole": false, 00:04:52.707 "seek_data": false, 00:04:52.707 "copy": true, 00:04:52.707 "nvme_iov_md": false 00:04:52.707 }, 00:04:52.707 "memory_domains": [ 00:04:52.707 { 00:04:52.707 "dma_device_id": "system", 00:04:52.707 "dma_device_type": 1 00:04:52.707 }, 00:04:52.707 { 00:04:52.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.707 "dma_device_type": 2 00:04:52.707 } 00:04:52.707 ], 00:04:52.707 "driver_specific": { 00:04:52.707 "passthru": { 00:04:52.707 "name": "Passthru0", 00:04:52.707 "base_bdev_name": "Malloc0" 00:04:52.707 } 00:04:52.707 } 00:04:52.707 } 00:04:52.707 ]' 00:04:52.707 12:13:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:52.707 12:13:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:52.707 12:13:14 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 
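The rpc_integrity flow traced above (create a malloc bdev, stack a passthru bdev on it, assert `bdev_get_bdevs` reports both, then tear down) can be sketched as a standalone fragment. This is a hedged sketch only: `rpc_cmd` is stubbed here with canned output, whereas in the real suite it wraps `scripts/rpc.py` against a live SPDK target, and the `grep -c` count is a dependency-free stand-in for the `jq length` check the log shows.

```shell
#!/bin/sh
# Hedged sketch of the rpc_integrity flow: create a malloc bdev, stack a
# passthru bdev on it, and assert that bdev_get_bdevs now reports two bdevs.
# rpc_cmd is a stub here; the real harness wraps scripts/rpc.py.
rpc_cmd() {
    case "$1" in
        bdev_malloc_create)   echo "Malloc0" ;;
        bdev_passthru_create) echo "Passthru0" ;;
        bdev_get_bdevs)       printf '[ { "name": "Malloc0" },\n  { "name": "Passthru0" } ]\n' ;;
        bdev_passthru_delete|bdev_malloc_delete) : ;;
    esac
}
malloc=$(rpc_cmd bdev_malloc_create 8 512)
pt=$(rpc_cmd bdev_passthru_create -b "$malloc" -p Passthru0)
# The real test pipes this through `jq length`; grep -c is a stand-in.
count=$(rpc_cmd bdev_get_bdevs | grep -c '"name"')
[ "$count" -eq 2 ] && echo "integrity ok: $count bdevs"
rpc_cmd bdev_passthru_delete "$pt"
rpc_cmd bdev_malloc_delete "$malloc"
```

After deleting both bdevs the real test repeats `bdev_get_bdevs` and asserts the list length is back to 0, which is the `'[' 0 == 0 ']'` check visible in the trace.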
00:04:52.707 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.707 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.707 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.707 12:13:14 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:52.707 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.707 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.707 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.707 12:13:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:52.707 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.707 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.707 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.707 12:13:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:52.707 12:13:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:52.707 12:13:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:52.707 00:04:52.707 real 0m0.274s 00:04:52.707 user 0m0.168s 00:04:52.707 sys 0m0.037s 00:04:52.707 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.707 12:13:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.707 ************************************ 00:04:52.707 END TEST rpc_integrity 00:04:52.707 ************************************ 00:04:52.707 12:13:14 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:52.707 12:13:14 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.707 12:13:14 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.707 12:13:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.966 ************************************ 00:04:52.966 START 
TEST rpc_plugins 00:04:52.966 ************************************ 00:04:52.966 12:13:14 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:52.966 12:13:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:52.966 12:13:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.966 12:13:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.966 12:13:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.966 12:13:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:52.966 12:13:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:52.966 12:13:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.966 12:13:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.966 12:13:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.966 12:13:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:52.966 { 00:04:52.966 "name": "Malloc1", 00:04:52.966 "aliases": [ 00:04:52.966 "3422eda9-d0bd-4881-ad9a-909a340c5567" 00:04:52.966 ], 00:04:52.966 "product_name": "Malloc disk", 00:04:52.966 "block_size": 4096, 00:04:52.966 "num_blocks": 256, 00:04:52.966 "uuid": "3422eda9-d0bd-4881-ad9a-909a340c5567", 00:04:52.966 "assigned_rate_limits": { 00:04:52.966 "rw_ios_per_sec": 0, 00:04:52.966 "rw_mbytes_per_sec": 0, 00:04:52.966 "r_mbytes_per_sec": 0, 00:04:52.966 "w_mbytes_per_sec": 0 00:04:52.966 }, 00:04:52.966 "claimed": false, 00:04:52.966 "zoned": false, 00:04:52.966 "supported_io_types": { 00:04:52.966 "read": true, 00:04:52.966 "write": true, 00:04:52.966 "unmap": true, 00:04:52.966 "flush": true, 00:04:52.966 "reset": true, 00:04:52.966 "nvme_admin": false, 00:04:52.966 "nvme_io": false, 00:04:52.966 "nvme_io_md": false, 00:04:52.966 "write_zeroes": true, 00:04:52.966 "zcopy": true, 00:04:52.966 "get_zone_info": false, 00:04:52.966 "zone_management": false, 
00:04:52.966 "zone_append": false, 00:04:52.966 "compare": false, 00:04:52.966 "compare_and_write": false, 00:04:52.966 "abort": true, 00:04:52.966 "seek_hole": false, 00:04:52.966 "seek_data": false, 00:04:52.966 "copy": true, 00:04:52.966 "nvme_iov_md": false 00:04:52.966 }, 00:04:52.966 "memory_domains": [ 00:04:52.966 { 00:04:52.966 "dma_device_id": "system", 00:04:52.966 "dma_device_type": 1 00:04:52.966 }, 00:04:52.966 { 00:04:52.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.966 "dma_device_type": 2 00:04:52.966 } 00:04:52.966 ], 00:04:52.966 "driver_specific": {} 00:04:52.966 } 00:04:52.966 ]' 00:04:52.966 12:13:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:52.966 12:13:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:52.966 12:13:14 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:52.966 12:13:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.966 12:13:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.966 12:13:14 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.966 12:13:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:52.966 12:13:14 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.966 12:13:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.966 12:13:15 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.966 12:13:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:52.966 12:13:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:52.966 12:13:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:52.966 00:04:52.966 real 0m0.145s 00:04:52.966 user 0m0.088s 00:04:52.966 sys 0m0.020s 00:04:52.966 12:13:15 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.966 12:13:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:52.966 
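Each sub-test in this log is driven through a `run_test` wrapper that prints the `START TEST` / `END TEST` asterisk banners seen throughout. A minimal sketch of that banner pattern follows; the real helper lives in `common/autotest_common.sh` and additionally records timing and toggles xtrace, which is omitted here.

```shell
#!/bin/sh
# Hedged sketch of the run_test banner pattern seen throughout this log:
# print a START banner, run the test body, then print an END banner and
# propagate the body's exit status.
run_test() {
    name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"
    rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
rpc_plugins_demo() { echo "plugin checks would run here"; }
run_test rpc_plugins_demo rpc_plugins_demo
```

Returning the body's status lets the outer harness fail the whole suite when any wrapped test fails, while still emitting the closing banner for the log.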
************************************ 00:04:52.966 END TEST rpc_plugins 00:04:52.966 ************************************ 00:04:52.966 12:13:15 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:52.966 12:13:15 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.966 12:13:15 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.966 12:13:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.966 ************************************ 00:04:52.966 START TEST rpc_trace_cmd_test 00:04:52.966 ************************************ 00:04:52.966 12:13:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:52.966 12:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:52.967 12:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:52.967 12:13:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.967 12:13:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:53.225 12:13:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.225 12:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:53.225 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1433755", 00:04:53.225 "tpoint_group_mask": "0x8", 00:04:53.225 "iscsi_conn": { 00:04:53.225 "mask": "0x2", 00:04:53.225 "tpoint_mask": "0x0" 00:04:53.225 }, 00:04:53.225 "scsi": { 00:04:53.225 "mask": "0x4", 00:04:53.225 "tpoint_mask": "0x0" 00:04:53.225 }, 00:04:53.225 "bdev": { 00:04:53.225 "mask": "0x8", 00:04:53.225 "tpoint_mask": "0xffffffffffffffff" 00:04:53.225 }, 00:04:53.225 "nvmf_rdma": { 00:04:53.225 "mask": "0x10", 00:04:53.225 "tpoint_mask": "0x0" 00:04:53.225 }, 00:04:53.225 "nvmf_tcp": { 00:04:53.225 "mask": "0x20", 00:04:53.225 "tpoint_mask": "0x0" 00:04:53.225 }, 00:04:53.225 "ftl": { 00:04:53.225 "mask": "0x40", 00:04:53.225 "tpoint_mask": "0x0" 00:04:53.225 }, 00:04:53.225 "blobfs": { 
00:04:53.225 "mask": "0x80", 00:04:53.225 "tpoint_mask": "0x0" 00:04:53.225 }, 00:04:53.225 "dsa": { 00:04:53.225 "mask": "0x200", 00:04:53.225 "tpoint_mask": "0x0" 00:04:53.225 }, 00:04:53.225 "thread": { 00:04:53.225 "mask": "0x400", 00:04:53.225 "tpoint_mask": "0x0" 00:04:53.225 }, 00:04:53.225 "nvme_pcie": { 00:04:53.225 "mask": "0x800", 00:04:53.225 "tpoint_mask": "0x0" 00:04:53.225 }, 00:04:53.225 "iaa": { 00:04:53.225 "mask": "0x1000", 00:04:53.225 "tpoint_mask": "0x0" 00:04:53.225 }, 00:04:53.225 "nvme_tcp": { 00:04:53.225 "mask": "0x2000", 00:04:53.225 "tpoint_mask": "0x0" 00:04:53.225 }, 00:04:53.225 "bdev_nvme": { 00:04:53.225 "mask": "0x4000", 00:04:53.225 "tpoint_mask": "0x0" 00:04:53.225 }, 00:04:53.225 "sock": { 00:04:53.225 "mask": "0x8000", 00:04:53.225 "tpoint_mask": "0x0" 00:04:53.225 }, 00:04:53.225 "blob": { 00:04:53.225 "mask": "0x10000", 00:04:53.225 "tpoint_mask": "0x0" 00:04:53.225 }, 00:04:53.225 "bdev_raid": { 00:04:53.225 "mask": "0x20000", 00:04:53.225 "tpoint_mask": "0x0" 00:04:53.225 }, 00:04:53.225 "scheduler": { 00:04:53.225 "mask": "0x40000", 00:04:53.226 "tpoint_mask": "0x0" 00:04:53.226 } 00:04:53.226 }' 00:04:53.226 12:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:53.226 12:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:53.226 12:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:53.226 12:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:53.226 12:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:53.226 12:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:53.226 12:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:53.226 12:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:53.226 12:13:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:53.226 12:13:15 rpc.rpc_trace_cmd_test -- 
rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:53.226 00:04:53.226 real 0m0.214s 00:04:53.226 user 0m0.180s 00:04:53.226 sys 0m0.027s 00:04:53.226 12:13:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.226 12:13:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:53.226 ************************************ 00:04:53.226 END TEST rpc_trace_cmd_test 00:04:53.226 ************************************ 00:04:53.226 12:13:15 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:53.226 12:13:15 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:53.226 12:13:15 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:53.226 12:13:15 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.226 12:13:15 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.226 12:13:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.486 ************************************ 00:04:53.486 START TEST rpc_daemon_integrity 00:04:53.486 ************************************ 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 
00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:53.486 { 00:04:53.486 "name": "Malloc2", 00:04:53.486 "aliases": [ 00:04:53.486 "13639450-9c3c-4fed-9a66-8f2c8016b7e7" 00:04:53.486 ], 00:04:53.486 "product_name": "Malloc disk", 00:04:53.486 "block_size": 512, 00:04:53.486 "num_blocks": 16384, 00:04:53.486 "uuid": "13639450-9c3c-4fed-9a66-8f2c8016b7e7", 00:04:53.486 "assigned_rate_limits": { 00:04:53.486 "rw_ios_per_sec": 0, 00:04:53.486 "rw_mbytes_per_sec": 0, 00:04:53.486 "r_mbytes_per_sec": 0, 00:04:53.486 "w_mbytes_per_sec": 0 00:04:53.486 }, 00:04:53.486 "claimed": false, 00:04:53.486 "zoned": false, 00:04:53.486 "supported_io_types": { 00:04:53.486 "read": true, 00:04:53.486 "write": true, 00:04:53.486 "unmap": true, 00:04:53.486 "flush": true, 00:04:53.486 "reset": true, 00:04:53.486 "nvme_admin": false, 00:04:53.486 "nvme_io": false, 00:04:53.486 "nvme_io_md": false, 00:04:53.486 "write_zeroes": true, 00:04:53.486 "zcopy": true, 00:04:53.486 "get_zone_info": false, 00:04:53.486 "zone_management": false, 00:04:53.486 "zone_append": false, 00:04:53.486 "compare": false, 00:04:53.486 "compare_and_write": false, 00:04:53.486 "abort": true, 00:04:53.486 "seek_hole": false, 00:04:53.486 "seek_data": false, 00:04:53.486 "copy": true, 00:04:53.486 "nvme_iov_md": false 00:04:53.486 }, 
00:04:53.486 "memory_domains": [ 00:04:53.486 { 00:04:53.486 "dma_device_id": "system", 00:04:53.486 "dma_device_type": 1 00:04:53.486 }, 00:04:53.486 { 00:04:53.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.486 "dma_device_type": 2 00:04:53.486 } 00:04:53.486 ], 00:04:53.486 "driver_specific": {} 00:04:53.486 } 00:04:53.486 ]' 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.486 [2024-12-10 12:13:15.536350] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:53.486 [2024-12-10 12:13:15.536376] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:53.486 [2024-12-10 12:13:15.536389] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b0d490 00:04:53.486 [2024-12-10 12:13:15.536395] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:53.486 [2024-12-10 12:13:15.537377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:53.486 [2024-12-10 12:13:15.537396] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:53.486 Passthru0 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:53.486 { 00:04:53.486 "name": "Malloc2", 00:04:53.486 "aliases": [ 00:04:53.486 "13639450-9c3c-4fed-9a66-8f2c8016b7e7" 00:04:53.486 ], 00:04:53.486 "product_name": "Malloc disk", 00:04:53.486 "block_size": 512, 00:04:53.486 "num_blocks": 16384, 00:04:53.486 "uuid": "13639450-9c3c-4fed-9a66-8f2c8016b7e7", 00:04:53.486 "assigned_rate_limits": { 00:04:53.486 "rw_ios_per_sec": 0, 00:04:53.486 "rw_mbytes_per_sec": 0, 00:04:53.486 "r_mbytes_per_sec": 0, 00:04:53.486 "w_mbytes_per_sec": 0 00:04:53.486 }, 00:04:53.486 "claimed": true, 00:04:53.486 "claim_type": "exclusive_write", 00:04:53.486 "zoned": false, 00:04:53.486 "supported_io_types": { 00:04:53.486 "read": true, 00:04:53.486 "write": true, 00:04:53.486 "unmap": true, 00:04:53.486 "flush": true, 00:04:53.486 "reset": true, 00:04:53.486 "nvme_admin": false, 00:04:53.486 "nvme_io": false, 00:04:53.486 "nvme_io_md": false, 00:04:53.486 "write_zeroes": true, 00:04:53.486 "zcopy": true, 00:04:53.486 "get_zone_info": false, 00:04:53.486 "zone_management": false, 00:04:53.486 "zone_append": false, 00:04:53.486 "compare": false, 00:04:53.486 "compare_and_write": false, 00:04:53.486 "abort": true, 00:04:53.486 "seek_hole": false, 00:04:53.486 "seek_data": false, 00:04:53.486 "copy": true, 00:04:53.486 "nvme_iov_md": false 00:04:53.486 }, 00:04:53.486 "memory_domains": [ 00:04:53.486 { 00:04:53.486 "dma_device_id": "system", 00:04:53.486 "dma_device_type": 1 00:04:53.486 }, 00:04:53.486 { 00:04:53.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.486 "dma_device_type": 2 00:04:53.486 } 00:04:53.486 ], 00:04:53.486 "driver_specific": {} 00:04:53.486 }, 00:04:53.486 { 00:04:53.486 "name": "Passthru0", 00:04:53.486 "aliases": [ 00:04:53.486 "1921982b-464c-5f01-a3c1-3fcb9cb2ce25" 00:04:53.486 ], 00:04:53.486 "product_name": "passthru", 00:04:53.486 "block_size": 512, 00:04:53.486 "num_blocks": 
16384, 00:04:53.486 "uuid": "1921982b-464c-5f01-a3c1-3fcb9cb2ce25", 00:04:53.486 "assigned_rate_limits": { 00:04:53.486 "rw_ios_per_sec": 0, 00:04:53.486 "rw_mbytes_per_sec": 0, 00:04:53.486 "r_mbytes_per_sec": 0, 00:04:53.486 "w_mbytes_per_sec": 0 00:04:53.486 }, 00:04:53.486 "claimed": false, 00:04:53.486 "zoned": false, 00:04:53.486 "supported_io_types": { 00:04:53.486 "read": true, 00:04:53.486 "write": true, 00:04:53.486 "unmap": true, 00:04:53.486 "flush": true, 00:04:53.486 "reset": true, 00:04:53.486 "nvme_admin": false, 00:04:53.486 "nvme_io": false, 00:04:53.486 "nvme_io_md": false, 00:04:53.486 "write_zeroes": true, 00:04:53.486 "zcopy": true, 00:04:53.486 "get_zone_info": false, 00:04:53.486 "zone_management": false, 00:04:53.486 "zone_append": false, 00:04:53.486 "compare": false, 00:04:53.486 "compare_and_write": false, 00:04:53.486 "abort": true, 00:04:53.486 "seek_hole": false, 00:04:53.486 "seek_data": false, 00:04:53.486 "copy": true, 00:04:53.486 "nvme_iov_md": false 00:04:53.486 }, 00:04:53.486 "memory_domains": [ 00:04:53.486 { 00:04:53.486 "dma_device_id": "system", 00:04:53.486 "dma_device_type": 1 00:04:53.486 }, 00:04:53.486 { 00:04:53.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.486 "dma_device_type": 2 00:04:53.486 } 00:04:53.486 ], 00:04:53.486 "driver_specific": { 00:04:53.486 "passthru": { 00:04:53.486 "name": "Passthru0", 00:04:53.486 "base_bdev_name": "Malloc2" 00:04:53.486 } 00:04:53.486 } 00:04:53.486 } 00:04:53.486 ]' 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:53.486 12:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:53.487 12:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:53.487 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.487 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.487 12:13:15 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.487 12:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:53.487 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.487 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.487 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.487 12:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:53.487 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:53.487 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.487 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.487 12:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:53.487 12:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:53.746 12:13:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:53.746 00:04:53.746 real 0m0.282s 00:04:53.746 user 0m0.177s 00:04:53.746 sys 0m0.037s 00:04:53.746 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.746 12:13:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:53.746 ************************************ 00:04:53.746 END TEST rpc_daemon_integrity 00:04:53.746 ************************************ 00:04:53.746 12:13:15 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:53.746 12:13:15 rpc -- rpc/rpc.sh@84 -- # killprocess 1433755 00:04:53.746 12:13:15 rpc -- common/autotest_common.sh@954 -- # '[' -z 1433755 ']' 00:04:53.746 12:13:15 rpc -- common/autotest_common.sh@958 -- # kill -0 1433755 00:04:53.746 12:13:15 rpc -- common/autotest_common.sh@959 -- # uname 00:04:53.746 12:13:15 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.746 12:13:15 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1433755 00:04:53.746 12:13:15 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.746 12:13:15 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.746 12:13:15 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1433755' 00:04:53.746 killing process with pid 1433755 00:04:53.746 12:13:15 rpc -- common/autotest_common.sh@973 -- # kill 1433755 00:04:53.746 12:13:15 rpc -- common/autotest_common.sh@978 -- # wait 1433755 00:04:54.005 00:04:54.005 real 0m2.104s 00:04:54.005 user 0m2.663s 00:04:54.005 sys 0m0.716s 00:04:54.005 12:13:16 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.005 12:13:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.005 ************************************ 00:04:54.005 END TEST rpc 00:04:54.005 ************************************ 00:04:54.005 12:13:16 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/skip_rpc.sh 00:04:54.005 12:13:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.005 12:13:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.005 12:13:16 -- common/autotest_common.sh@10 -- # set +x 00:04:54.005 ************************************ 00:04:54.005 START TEST skip_rpc 00:04:54.005 ************************************ 00:04:54.005 12:13:16 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/skip_rpc.sh 00:04:54.269 * Looking for test storage... 
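The `killprocess` sequence traced above probes the pid with `kill -0`, checks the process name via `ps --no-headers -o comm=`, then kills and waits. A simplified sketch of that teardown pattern is below; the process-name guard against accidentally killing `sudo` is omitted, and `sleep` stands in for the `spdk_tgt` daemon.

```shell
#!/bin/sh
# Hedged sketch of the killprocess helper: confirm the pid exists, terminate
# it, then wait so the exit status is reaped and no zombie is left behind.
killprocess() {
    pid=$1
    # kill -0 only probes for existence; it sends no signal.
    kill -0 "$pid" 2>/dev/null || return 1
    kill "$pid" 2>/dev/null
    wait "$pid" 2>/dev/null || true   # reap; status reflects the signal
    return 0
}
sleep 30 &                # stand-in for the spdk_tgt daemon
target_pid=$!
killprocess "$target_pid"
kill -0 "$target_pid" 2>/dev/null && echo "still running" || echo "target stopped"
```

The `wait` matters: without it the child's exit status is never collected, and the harness's later `wait $pid` (seen in the log as `common/autotest_common.sh@978`) would have nothing deterministic to report.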
00:04:54.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc 00:04:54.269 12:13:16 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:54.269 12:13:16 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:54.269 12:13:16 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:54.269 12:13:16 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.269 12:13:16 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:54.269 12:13:16 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.269 12:13:16 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:54.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.269 --rc genhtml_branch_coverage=1 00:04:54.269 --rc genhtml_function_coverage=1 00:04:54.269 --rc genhtml_legend=1 00:04:54.269 --rc geninfo_all_blocks=1 00:04:54.269 --rc geninfo_unexecuted_blocks=1 00:04:54.269 00:04:54.269 ' 00:04:54.269 12:13:16 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:54.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.269 --rc genhtml_branch_coverage=1 00:04:54.269 --rc genhtml_function_coverage=1 00:04:54.269 --rc genhtml_legend=1 00:04:54.269 --rc geninfo_all_blocks=1 00:04:54.269 --rc geninfo_unexecuted_blocks=1 00:04:54.269 00:04:54.269 ' 00:04:54.269 12:13:16 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:54.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.269 --rc genhtml_branch_coverage=1 00:04:54.269 --rc genhtml_function_coverage=1 00:04:54.269 --rc genhtml_legend=1 00:04:54.269 --rc geninfo_all_blocks=1 00:04:54.269 --rc geninfo_unexecuted_blocks=1 00:04:54.269 00:04:54.269 ' 00:04:54.269 12:13:16 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:54.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.269 --rc genhtml_branch_coverage=1 00:04:54.269 --rc genhtml_function_coverage=1 00:04:54.269 --rc genhtml_legend=1 00:04:54.269 --rc geninfo_all_blocks=1 00:04:54.269 --rc geninfo_unexecuted_blocks=1 00:04:54.269 00:04:54.269 ' 00:04:54.269 12:13:16 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/config.json 00:04:54.269 12:13:16 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/log.txt 00:04:54.269 12:13:16 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:54.269 12:13:16 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.269 12:13:16 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.270 12:13:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.270 ************************************ 00:04:54.270 START TEST skip_rpc 00:04:54.270 ************************************ 00:04:54.270 12:13:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:54.270 12:13:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:54.270 12:13:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1434392 00:04:54.270 12:13:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.270 12:13:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 
5 00:04:54.270 [2024-12-10 12:13:16.392534] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:04:54.270 [2024-12-10 12:13:16.392572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1434392 ] 00:04:54.529 [2024-12-10 12:13:16.449769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.529 [2024-12-10 12:13:16.489168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:59.901 12:13:21 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1434392 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1434392 ']' 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1434392 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1434392 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1434392' 00:04:59.901 killing process with pid 1434392 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1434392 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1434392 00:04:59.901 00:04:59.901 real 0m5.365s 00:04:59.901 user 0m5.143s 00:04:59.901 sys 0m0.260s 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.901 12:13:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.901 ************************************ 00:04:59.901 END TEST skip_rpc 00:04:59.901 ************************************ 00:04:59.901 12:13:21 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:59.901 12:13:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.901 12:13:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.901 12:13:21 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.901 ************************************ 00:04:59.901 START TEST skip_rpc_with_json 00:04:59.901 ************************************ 00:04:59.901 12:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:59.901 12:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:59.901 12:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1435343 00:04:59.901 12:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.901 12:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.901 12:13:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1435343 00:04:59.901 12:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1435343 ']' 00:04:59.901 12:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.901 12:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.901 12:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.901 12:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.901 12:13:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.901 [2024-12-10 12:13:21.837838] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:04:59.901 [2024-12-10 12:13:21.837879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1435343 ] 00:04:59.901 [2024-12-10 12:13:21.914500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.901 [2024-12-10 12:13:21.955763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.161 12:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.161 12:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:00.161 12:13:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:00.161 12:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.161 12:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.161 [2024-12-10 12:13:22.171606] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:00.161 request: 00:05:00.161 { 00:05:00.161 "trtype": "tcp", 00:05:00.161 "method": "nvmf_get_transports", 00:05:00.161 "req_id": 1 00:05:00.161 } 00:05:00.161 Got JSON-RPC error response 00:05:00.161 response: 00:05:00.161 { 00:05:00.161 "code": -19, 00:05:00.161 "message": "No such device" 00:05:00.161 } 00:05:00.161 12:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:00.161 12:13:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:00.161 12:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.161 12:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.161 [2024-12-10 12:13:22.183717] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:00.161 12:13:22 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.161 12:13:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:00.161 12:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.161 12:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:00.421 12:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.421 12:13:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/config.json 00:05:00.421 { 00:05:00.421 "subsystems": [ 00:05:00.421 { 00:05:00.421 "subsystem": "fsdev", 00:05:00.421 "config": [ 00:05:00.421 { 00:05:00.421 "method": "fsdev_set_opts", 00:05:00.421 "params": { 00:05:00.421 "fsdev_io_pool_size": 65535, 00:05:00.421 "fsdev_io_cache_size": 256 00:05:00.421 } 00:05:00.421 } 00:05:00.421 ] 00:05:00.421 }, 00:05:00.421 { 00:05:00.421 "subsystem": "vfio_user_target", 00:05:00.421 "config": null 00:05:00.421 }, 00:05:00.421 { 00:05:00.421 "subsystem": "keyring", 00:05:00.421 "config": [] 00:05:00.421 }, 00:05:00.421 { 00:05:00.421 "subsystem": "iobuf", 00:05:00.421 "config": [ 00:05:00.421 { 00:05:00.421 "method": "iobuf_set_options", 00:05:00.421 "params": { 00:05:00.421 "small_pool_count": 8192, 00:05:00.421 "large_pool_count": 1024, 00:05:00.421 "small_bufsize": 8192, 00:05:00.421 "large_bufsize": 135168, 00:05:00.421 "enable_numa": false 00:05:00.421 } 00:05:00.421 } 00:05:00.421 ] 00:05:00.421 }, 00:05:00.421 { 00:05:00.421 "subsystem": "sock", 00:05:00.421 "config": [ 00:05:00.421 { 00:05:00.421 "method": "sock_set_default_impl", 00:05:00.421 "params": { 00:05:00.421 "impl_name": "posix" 00:05:00.421 } 00:05:00.421 }, 00:05:00.421 { 00:05:00.421 "method": "sock_impl_set_options", 00:05:00.421 "params": { 00:05:00.421 "impl_name": "ssl", 00:05:00.421 "recv_buf_size": 4096, 00:05:00.421 "send_buf_size": 4096, 
00:05:00.421 "enable_recv_pipe": true, 00:05:00.421 "enable_quickack": false, 00:05:00.421 "enable_placement_id": 0, 00:05:00.421 "enable_zerocopy_send_server": true, 00:05:00.421 "enable_zerocopy_send_client": false, 00:05:00.421 "zerocopy_threshold": 0, 00:05:00.421 "tls_version": 0, 00:05:00.421 "enable_ktls": false 00:05:00.421 } 00:05:00.421 }, 00:05:00.421 { 00:05:00.421 "method": "sock_impl_set_options", 00:05:00.421 "params": { 00:05:00.421 "impl_name": "posix", 00:05:00.421 "recv_buf_size": 2097152, 00:05:00.421 "send_buf_size": 2097152, 00:05:00.421 "enable_recv_pipe": true, 00:05:00.421 "enable_quickack": false, 00:05:00.421 "enable_placement_id": 0, 00:05:00.421 "enable_zerocopy_send_server": true, 00:05:00.421 "enable_zerocopy_send_client": false, 00:05:00.421 "zerocopy_threshold": 0, 00:05:00.421 "tls_version": 0, 00:05:00.421 "enable_ktls": false 00:05:00.421 } 00:05:00.421 } 00:05:00.421 ] 00:05:00.421 }, 00:05:00.421 { 00:05:00.421 "subsystem": "vmd", 00:05:00.421 "config": [] 00:05:00.421 }, 00:05:00.421 { 00:05:00.421 "subsystem": "accel", 00:05:00.421 "config": [ 00:05:00.421 { 00:05:00.421 "method": "accel_set_options", 00:05:00.421 "params": { 00:05:00.421 "small_cache_size": 128, 00:05:00.421 "large_cache_size": 16, 00:05:00.421 "task_count": 2048, 00:05:00.421 "sequence_count": 2048, 00:05:00.421 "buf_count": 2048 00:05:00.421 } 00:05:00.421 } 00:05:00.421 ] 00:05:00.421 }, 00:05:00.421 { 00:05:00.421 "subsystem": "bdev", 00:05:00.421 "config": [ 00:05:00.421 { 00:05:00.421 "method": "bdev_set_options", 00:05:00.421 "params": { 00:05:00.421 "bdev_io_pool_size": 65535, 00:05:00.421 "bdev_io_cache_size": 256, 00:05:00.421 "bdev_auto_examine": true, 00:05:00.421 "iobuf_small_cache_size": 128, 00:05:00.421 "iobuf_large_cache_size": 16 00:05:00.421 } 00:05:00.421 }, 00:05:00.421 { 00:05:00.421 "method": "bdev_raid_set_options", 00:05:00.421 "params": { 00:05:00.421 "process_window_size_kb": 1024, 00:05:00.421 "process_max_bandwidth_mb_sec": 0 
00:05:00.421 } 00:05:00.421 }, 00:05:00.421 { 00:05:00.421 "method": "bdev_iscsi_set_options", 00:05:00.421 "params": { 00:05:00.421 "timeout_sec": 30 00:05:00.421 } 00:05:00.421 }, 00:05:00.421 { 00:05:00.421 "method": "bdev_nvme_set_options", 00:05:00.421 "params": { 00:05:00.421 "action_on_timeout": "none", 00:05:00.421 "timeout_us": 0, 00:05:00.421 "timeout_admin_us": 0, 00:05:00.421 "keep_alive_timeout_ms": 10000, 00:05:00.421 "arbitration_burst": 0, 00:05:00.421 "low_priority_weight": 0, 00:05:00.421 "medium_priority_weight": 0, 00:05:00.421 "high_priority_weight": 0, 00:05:00.421 "nvme_adminq_poll_period_us": 10000, 00:05:00.421 "nvme_ioq_poll_period_us": 0, 00:05:00.421 "io_queue_requests": 0, 00:05:00.421 "delay_cmd_submit": true, 00:05:00.421 "transport_retry_count": 4, 00:05:00.421 "bdev_retry_count": 3, 00:05:00.421 "transport_ack_timeout": 0, 00:05:00.421 "ctrlr_loss_timeout_sec": 0, 00:05:00.421 "reconnect_delay_sec": 0, 00:05:00.421 "fast_io_fail_timeout_sec": 0, 00:05:00.421 "disable_auto_failback": false, 00:05:00.421 "generate_uuids": false, 00:05:00.421 "transport_tos": 0, 00:05:00.421 "nvme_error_stat": false, 00:05:00.421 "rdma_srq_size": 0, 00:05:00.421 "io_path_stat": false, 00:05:00.421 "allow_accel_sequence": false, 00:05:00.421 "rdma_max_cq_size": 0, 00:05:00.421 "rdma_cm_event_timeout_ms": 0, 00:05:00.421 "dhchap_digests": [ 00:05:00.421 "sha256", 00:05:00.421 "sha384", 00:05:00.421 "sha512" 00:05:00.421 ], 00:05:00.421 "dhchap_dhgroups": [ 00:05:00.421 "null", 00:05:00.421 "ffdhe2048", 00:05:00.421 "ffdhe3072", 00:05:00.421 "ffdhe4096", 00:05:00.421 "ffdhe6144", 00:05:00.421 "ffdhe8192" 00:05:00.421 ] 00:05:00.421 } 00:05:00.421 }, 00:05:00.421 { 00:05:00.421 "method": "bdev_nvme_set_hotplug", 00:05:00.421 "params": { 00:05:00.421 "period_us": 100000, 00:05:00.421 "enable": false 00:05:00.421 } 00:05:00.421 }, 00:05:00.421 { 00:05:00.421 "method": "bdev_wait_for_examine" 00:05:00.421 } 00:05:00.421 ] 00:05:00.421 }, 00:05:00.421 { 
00:05:00.421 "subsystem": "scsi", 00:05:00.421 "config": null 00:05:00.421 }, 00:05:00.421 { 00:05:00.421 "subsystem": "scheduler", 00:05:00.421 "config": [ 00:05:00.421 { 00:05:00.421 "method": "framework_set_scheduler", 00:05:00.421 "params": { 00:05:00.421 "name": "static" 00:05:00.421 } 00:05:00.421 } 00:05:00.421 ] 00:05:00.422 }, 00:05:00.422 { 00:05:00.422 "subsystem": "vhost_scsi", 00:05:00.422 "config": [] 00:05:00.422 }, 00:05:00.422 { 00:05:00.422 "subsystem": "vhost_blk", 00:05:00.422 "config": [] 00:05:00.422 }, 00:05:00.422 { 00:05:00.422 "subsystem": "ublk", 00:05:00.422 "config": [] 00:05:00.422 }, 00:05:00.422 { 00:05:00.422 "subsystem": "nbd", 00:05:00.422 "config": [] 00:05:00.422 }, 00:05:00.422 { 00:05:00.422 "subsystem": "nvmf", 00:05:00.422 "config": [ 00:05:00.422 { 00:05:00.422 "method": "nvmf_set_config", 00:05:00.422 "params": { 00:05:00.422 "discovery_filter": "match_any", 00:05:00.422 "admin_cmd_passthru": { 00:05:00.422 "identify_ctrlr": false 00:05:00.422 }, 00:05:00.422 "dhchap_digests": [ 00:05:00.422 "sha256", 00:05:00.422 "sha384", 00:05:00.422 "sha512" 00:05:00.422 ], 00:05:00.422 "dhchap_dhgroups": [ 00:05:00.422 "null", 00:05:00.422 "ffdhe2048", 00:05:00.422 "ffdhe3072", 00:05:00.422 "ffdhe4096", 00:05:00.422 "ffdhe6144", 00:05:00.422 "ffdhe8192" 00:05:00.422 ] 00:05:00.422 } 00:05:00.422 }, 00:05:00.422 { 00:05:00.422 "method": "nvmf_set_max_subsystems", 00:05:00.422 "params": { 00:05:00.422 "max_subsystems": 1024 00:05:00.422 } 00:05:00.422 }, 00:05:00.422 { 00:05:00.422 "method": "nvmf_set_crdt", 00:05:00.422 "params": { 00:05:00.422 "crdt1": 0, 00:05:00.422 "crdt2": 0, 00:05:00.422 "crdt3": 0 00:05:00.422 } 00:05:00.422 }, 00:05:00.422 { 00:05:00.422 "method": "nvmf_create_transport", 00:05:00.422 "params": { 00:05:00.422 "trtype": "TCP", 00:05:00.422 "max_queue_depth": 128, 00:05:00.422 "max_io_qpairs_per_ctrlr": 127, 00:05:00.422 "in_capsule_data_size": 4096, 00:05:00.422 "max_io_size": 131072, 00:05:00.422 
"io_unit_size": 131072, 00:05:00.422 "max_aq_depth": 128, 00:05:00.422 "num_shared_buffers": 511, 00:05:00.422 "buf_cache_size": 4294967295, 00:05:00.422 "dif_insert_or_strip": false, 00:05:00.422 "zcopy": false, 00:05:00.422 "c2h_success": true, 00:05:00.422 "sock_priority": 0, 00:05:00.422 "abort_timeout_sec": 1, 00:05:00.422 "ack_timeout": 0, 00:05:00.422 "data_wr_pool_size": 0 00:05:00.422 } 00:05:00.422 } 00:05:00.422 ] 00:05:00.422 }, 00:05:00.422 { 00:05:00.422 "subsystem": "iscsi", 00:05:00.422 "config": [ 00:05:00.422 { 00:05:00.422 "method": "iscsi_set_options", 00:05:00.422 "params": { 00:05:00.422 "node_base": "iqn.2016-06.io.spdk", 00:05:00.422 "max_sessions": 128, 00:05:00.422 "max_connections_per_session": 2, 00:05:00.422 "max_queue_depth": 64, 00:05:00.422 "default_time2wait": 2, 00:05:00.422 "default_time2retain": 20, 00:05:00.422 "first_burst_length": 8192, 00:05:00.422 "immediate_data": true, 00:05:00.422 "allow_duplicated_isid": false, 00:05:00.422 "error_recovery_level": 0, 00:05:00.422 "nop_timeout": 60, 00:05:00.422 "nop_in_interval": 30, 00:05:00.422 "disable_chap": false, 00:05:00.422 "require_chap": false, 00:05:00.422 "mutual_chap": false, 00:05:00.422 "chap_group": 0, 00:05:00.422 "max_large_datain_per_connection": 64, 00:05:00.422 "max_r2t_per_connection": 4, 00:05:00.422 "pdu_pool_size": 36864, 00:05:00.422 "immediate_data_pool_size": 16384, 00:05:00.422 "data_out_pool_size": 2048 00:05:00.422 } 00:05:00.422 } 00:05:00.422 ] 00:05:00.422 } 00:05:00.422 ] 00:05:00.422 } 00:05:00.422 12:13:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:00.422 12:13:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1435343 00:05:00.422 12:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1435343 ']' 00:05:00.422 12:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1435343 00:05:00.422 12:13:22 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:05:00.422 12:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.422 12:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1435343 00:05:00.422 12:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:00.422 12:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:00.422 12:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1435343' 00:05:00.422 killing process with pid 1435343 00:05:00.422 12:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1435343 00:05:00.422 12:13:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1435343 00:05:00.681 12:13:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1435381 00:05:00.681 12:13:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/config.json 00:05:00.681 12:13:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:05.953 12:13:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1435381 00:05:05.953 12:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1435381 ']' 00:05:05.953 12:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1435381 00:05:05.953 12:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:05.953 12:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.953 12:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1435381 00:05:05.953 12:13:27 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.953 12:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.953 12:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1435381' 00:05:05.953 killing process with pid 1435381 00:05:05.953 12:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1435381 00:05:05.953 12:13:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1435381 00:05:05.953 12:13:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/log.txt 00:05:05.953 12:13:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/log.txt 00:05:05.953 00:05:05.953 real 0m6.282s 00:05:05.953 user 0m5.950s 00:05:05.953 sys 0m0.629s 00:05:05.953 12:13:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.953 12:13:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:05.953 ************************************ 00:05:05.953 END TEST skip_rpc_with_json 00:05:05.953 ************************************ 00:05:05.953 12:13:28 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:05.953 12:13:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.953 12:13:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.953 12:13:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.213 ************************************ 00:05:06.213 START TEST skip_rpc_with_delay 00:05:06.213 ************************************ 00:05:06.213 12:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:06.213 12:13:28 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # 
NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:06.213 12:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:06.213 12:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:06.213 12:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:05:06.213 12:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.213 12:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:05:06.213 12:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.213 12:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:05:06.213 12:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.213 12:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:05:06.213 12:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt ]] 00:05:06.213 12:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:06.213 [2024-12-10 12:13:28.190914] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:06.213 12:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:06.213 12:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:06.213 12:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:06.213 12:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:06.213 00:05:06.213 real 0m0.071s 00:05:06.213 user 0m0.042s 00:05:06.213 sys 0m0.028s 00:05:06.213 12:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.213 12:13:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:06.213 ************************************ 00:05:06.213 END TEST skip_rpc_with_delay 00:05:06.213 ************************************ 00:05:06.213 12:13:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:06.213 12:13:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:06.213 12:13:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:06.213 12:13:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.213 12:13:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.213 12:13:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.213 ************************************ 00:05:06.213 START TEST exit_on_failed_rpc_init 00:05:06.213 ************************************ 00:05:06.213 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:06.213 12:13:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1436421 00:05:06.213 12:13:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1436421 00:05:06.213 12:13:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 
00:05:06.213 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1436421 ']' 00:05:06.213 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.213 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.213 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.213 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.213 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:06.213 [2024-12-10 12:13:28.326544] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:05:06.213 [2024-12-10 12:13:28.326585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1436421 ] 00:05:06.472 [2024-12-10 12:13:28.403485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.472 [2024-12-10 12:13:28.445048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.731 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.731 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:06.731 12:13:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.731 12:13:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.731 
12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:06.731 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.731 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:05:06.731 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.731 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:05:06.731 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.732 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:05:06.732 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.732 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:05:06.732 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt ]] 00:05:06.732 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x2 00:05:06.732 [2024-12-10 12:13:28.720206] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:05:06.732 [2024-12-10 12:13:28.720254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1436560 ] 00:05:06.732 [2024-12-10 12:13:28.796269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.732 [2024-12-10 12:13:28.836203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.732 [2024-12-10 12:13:28.836258] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:06.732 [2024-12-10 12:13:28.836268] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:06.732 [2024-12-10 12:13:28.836275] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:06.732 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:06.732 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:06.732 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:06.732 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:06.732 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:06.732 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:06.732 12:13:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:06.732 12:13:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1436421 00:05:06.732 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1436421 ']' 00:05:06.732 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1436421 00:05:06.732 12:13:28 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:06.732 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.732 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1436421 00:05:06.991 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.991 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.991 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1436421' 00:05:06.991 killing process with pid 1436421 00:05:06.991 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1436421 00:05:06.991 12:13:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1436421 00:05:07.250 00:05:07.250 real 0m0.955s 00:05:07.250 user 0m1.022s 00:05:07.250 sys 0m0.392s 00:05:07.250 12:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.250 12:13:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:07.250 ************************************ 00:05:07.250 END TEST exit_on_failed_rpc_init 00:05:07.250 ************************************ 00:05:07.250 12:13:29 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/config.json 00:05:07.250 00:05:07.250 real 0m13.127s 00:05:07.250 user 0m12.370s 00:05:07.250 sys 0m1.581s 00:05:07.250 12:13:29 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.250 12:13:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.250 ************************************ 00:05:07.250 END TEST skip_rpc 00:05:07.250 ************************************ 00:05:07.250 12:13:29 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_client/rpc_client.sh 00:05:07.250 12:13:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.250 12:13:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.250 12:13:29 -- common/autotest_common.sh@10 -- # set +x 00:05:07.250 ************************************ 00:05:07.250 START TEST rpc_client 00:05:07.250 ************************************ 00:05:07.250 12:13:29 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_client/rpc_client.sh 00:05:07.250 * Looking for test storage... 00:05:07.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_client 00:05:07.510 12:13:29 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:07.510 12:13:29 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:07.510 12:13:29 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:07.510 12:13:29 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@344 -- # 
case "$op" in 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.510 12:13:29 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:07.510 12:13:29 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.510 12:13:29 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:07.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.510 --rc genhtml_branch_coverage=1 00:05:07.510 --rc genhtml_function_coverage=1 00:05:07.510 --rc genhtml_legend=1 00:05:07.510 --rc geninfo_all_blocks=1 00:05:07.510 --rc geninfo_unexecuted_blocks=1 00:05:07.510 00:05:07.510 ' 00:05:07.510 12:13:29 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:07.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.510 --rc genhtml_branch_coverage=1 
00:05:07.510 --rc genhtml_function_coverage=1 00:05:07.510 --rc genhtml_legend=1 00:05:07.510 --rc geninfo_all_blocks=1 00:05:07.510 --rc geninfo_unexecuted_blocks=1 00:05:07.510 00:05:07.510 ' 00:05:07.511 12:13:29 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:07.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.511 --rc genhtml_branch_coverage=1 00:05:07.511 --rc genhtml_function_coverage=1 00:05:07.511 --rc genhtml_legend=1 00:05:07.511 --rc geninfo_all_blocks=1 00:05:07.511 --rc geninfo_unexecuted_blocks=1 00:05:07.511 00:05:07.511 ' 00:05:07.511 12:13:29 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:07.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.511 --rc genhtml_branch_coverage=1 00:05:07.511 --rc genhtml_function_coverage=1 00:05:07.511 --rc genhtml_legend=1 00:05:07.511 --rc geninfo_all_blocks=1 00:05:07.511 --rc geninfo_unexecuted_blocks=1 00:05:07.511 00:05:07.511 ' 00:05:07.511 12:13:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_client/rpc_client_test 00:05:07.511 OK 00:05:07.511 12:13:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:07.511 00:05:07.511 real 0m0.199s 00:05:07.511 user 0m0.119s 00:05:07.511 sys 0m0.094s 00:05:07.511 12:13:29 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.511 12:13:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:07.511 ************************************ 00:05:07.511 END TEST rpc_client 00:05:07.511 ************************************ 00:05:07.511 12:13:29 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_config.sh 00:05:07.511 12:13:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.511 12:13:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.511 12:13:29 -- 
common/autotest_common.sh@10 -- # set +x 00:05:07.511 ************************************ 00:05:07.511 START TEST json_config 00:05:07.511 ************************************ 00:05:07.511 12:13:29 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_config.sh 00:05:07.511 12:13:29 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:07.511 12:13:29 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:07.511 12:13:29 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:07.771 12:13:29 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:07.771 12:13:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.771 12:13:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.771 12:13:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.771 12:13:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.771 12:13:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.771 12:13:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.771 12:13:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.771 12:13:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.771 12:13:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.771 12:13:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.771 12:13:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.771 12:13:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:07.771 12:13:29 json_config -- scripts/common.sh@345 -- # : 1 00:05:07.771 12:13:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.771 12:13:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.771 12:13:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:07.771 12:13:29 json_config -- scripts/common.sh@353 -- # local d=1 00:05:07.771 12:13:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.771 12:13:29 json_config -- scripts/common.sh@355 -- # echo 1 00:05:07.771 12:13:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.771 12:13:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:07.771 12:13:29 json_config -- scripts/common.sh@353 -- # local d=2 00:05:07.771 12:13:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.771 12:13:29 json_config -- scripts/common.sh@355 -- # echo 2 00:05:07.771 12:13:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.771 12:13:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.771 12:13:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.771 12:13:29 json_config -- scripts/common.sh@368 -- # return 0 00:05:07.771 12:13:29 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.771 12:13:29 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:07.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.771 --rc genhtml_branch_coverage=1 00:05:07.771 --rc genhtml_function_coverage=1 00:05:07.771 --rc genhtml_legend=1 00:05:07.771 --rc geninfo_all_blocks=1 00:05:07.771 --rc geninfo_unexecuted_blocks=1 00:05:07.771 00:05:07.771 ' 00:05:07.771 12:13:29 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:07.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.771 --rc genhtml_branch_coverage=1 00:05:07.771 --rc genhtml_function_coverage=1 00:05:07.771 --rc genhtml_legend=1 00:05:07.771 --rc geninfo_all_blocks=1 00:05:07.771 --rc geninfo_unexecuted_blocks=1 00:05:07.771 00:05:07.771 ' 00:05:07.771 12:13:29 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:07.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.771 --rc genhtml_branch_coverage=1 00:05:07.771 --rc genhtml_function_coverage=1 00:05:07.771 --rc genhtml_legend=1 00:05:07.771 --rc geninfo_all_blocks=1 00:05:07.771 --rc geninfo_unexecuted_blocks=1 00:05:07.771 00:05:07.771 ' 00:05:07.771 12:13:29 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:07.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.771 --rc genhtml_branch_coverage=1 00:05:07.771 --rc genhtml_function_coverage=1 00:05:07.771 --rc genhtml_legend=1 00:05:07.771 --rc geninfo_all_blocks=1 00:05:07.771 --rc geninfo_unexecuted_blocks=1 00:05:07.771 00:05:07.771 ' 00:05:07.771 12:13:29 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:05:07.771 12:13:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:07.771 12:13:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:07.771 12:13:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:07.771 12:13:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:07.771 12:13:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:07.771 12:13:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:07.771 12:13:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:07.771 12:13:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:07.772 12:13:29 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:07.772 12:13:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:07.772 12:13:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:07.772 12:13:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:07.772 12:13:29 json_config -- nvmf/common.sh@18 
-- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:07.772 12:13:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:07.772 12:13:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:07.772 12:13:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:07.772 12:13:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:07.772 12:13:29 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:05:07.772 12:13:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:07.772 12:13:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:07.772 12:13:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:07.772 12:13:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:07.772 12:13:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.772 12:13:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.772 12:13:29 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.772 12:13:29 json_config -- paths/export.sh@5 -- # export PATH 00:05:07.772 12:13:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.772 12:13:29 json_config -- nvmf/common.sh@51 -- # : 0 00:05:07.772 12:13:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:07.772 12:13:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:07.772 12:13:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:07.772 12:13:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:07.772 12:13:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:07.772 12:13:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:07.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:07.772 12:13:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:07.772 12:13:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:07.772 12:13:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:07.772 12:13:29 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/common.sh 00:05:07.772 12:13:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:07.772 12:13:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:07.772 12:13:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:07.772 12:13:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:07.772 12:13:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:07.772 12:13:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:07.772 12:13:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:07.772 12:13:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:07.772 12:13:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:07.772 12:13:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:07.772 12:13:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_initiator_config.json') 00:05:07.772 12:13:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:07.772 12:13:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:07.772 12:13:29 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:07.772 12:13:29 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:07.772 INFO: JSON configuration test init 00:05:07.772 12:13:29 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:07.772 12:13:29 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:07.772 12:13:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:07.772 12:13:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.772 12:13:29 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:07.772 12:13:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:07.772 12:13:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.772 12:13:29 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:07.772 12:13:29 json_config -- json_config/common.sh@9 -- # local app=target 00:05:07.772 12:13:29 json_config -- json_config/common.sh@10 -- # shift 00:05:07.772 12:13:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:07.772 12:13:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:07.772 12:13:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:07.772 12:13:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.772 12:13:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.772 12:13:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1436867 00:05:07.772 12:13:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:07.772 Waiting for target to run... 
00:05:07.772 12:13:29 json_config -- json_config/common.sh@25 -- # waitforlisten 1436867 /var/tmp/spdk_tgt.sock 00:05:07.772 12:13:29 json_config -- common/autotest_common.sh@835 -- # '[' -z 1436867 ']' 00:05:07.772 12:13:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:07.772 12:13:29 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:07.772 12:13:29 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.772 12:13:29 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:07.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:07.772 12:13:29 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.772 12:13:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.772 [2024-12-10 12:13:29.852259] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:05:07.772 [2024-12-10 12:13:29.852310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1436867 ] 00:05:08.032 [2024-12-10 12:13:30.138881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.032 [2024-12-10 12:13:30.172957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.599 12:13:30 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.599 12:13:30 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:08.599 12:13:30 json_config -- json_config/common.sh@26 -- # echo '' 00:05:08.600 00:05:08.600 12:13:30 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:08.600 12:13:30 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:08.600 12:13:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:08.600 12:13:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.600 12:13:30 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:08.600 12:13:30 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:08.600 12:13:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:08.600 12:13:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.600 12:13:30 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:08.600 12:13:30 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:08.600 12:13:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:11.887 12:13:33 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:05:11.887 12:13:33 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:11.887 12:13:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:11.887 12:13:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.887 12:13:33 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:11.887 12:13:33 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:11.887 12:13:33 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:11.887 12:13:33 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:11.887 12:13:33 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:11.887 12:13:33 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:11.887 12:13:33 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:11.887 12:13:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:11.887 12:13:34 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:11.887 12:13:34 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:11.887 12:13:34 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:11.887 12:13:34 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:11.887 12:13:34 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:11.887 12:13:34 json_config -- json_config/json_config.sh@54 -- # sort 00:05:11.887 12:13:34 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:11.887 12:13:34 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:05:11.887 12:13:34 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:11.887 12:13:34 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:11.887 12:13:34 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:11.887 12:13:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.146 12:13:34 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:12.146 12:13:34 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:12.146 12:13:34 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:12.146 12:13:34 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:12.146 12:13:34 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:12.146 12:13:34 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:12.146 12:13:34 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:12.146 12:13:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:12.146 12:13:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.146 12:13:34 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:12.146 12:13:34 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:12.146 12:13:34 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:12.146 12:13:34 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:12.146 12:13:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:12.146 MallocForNvmf0 00:05:12.146 12:13:34 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name 
MallocForNvmf1 00:05:12.146 12:13:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:12.404 MallocForNvmf1 00:05:12.404 12:13:34 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:12.404 12:13:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:12.662 [2024-12-10 12:13:34.629131] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:12.662 12:13:34 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:12.662 12:13:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:12.920 12:13:34 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:12.920 12:13:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:12.920 12:13:35 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:12.920 12:13:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:13.178 12:13:35 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:13.178 12:13:35 
json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:05:13.437 [2024-12-10 12:13:35.431610] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:05:13.437 12:13:35 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config
00:05:13.437 12:13:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:13.437 12:13:35 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:13.437 12:13:35 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target
00:05:13.437 12:13:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:13.437 12:13:35 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:13.437 12:13:35 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]]
00:05:13.437 12:13:35 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:13.437 12:13:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:13.696 MallocBdevForConfigChangeCheck
00:05:13.696 12:13:35 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init
00:05:13.696 12:13:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:13.696 12:13:35 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:13.696 12:13:35 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config
00:05:13.696 12:13:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:13.955 12:13:36 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...'
00:05:13.955 INFO: shutting down applications...
00:05:13.955 12:13:36 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:05:13.955 12:13:36 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:05:13.955 12:13:36 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:05:13.955 12:13:36 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:05:15.859 Calling clear_iscsi_subsystem
00:05:15.859 Calling clear_nvmf_subsystem
00:05:15.859 Calling clear_nbd_subsystem
00:05:15.859 Calling clear_ublk_subsystem
00:05:15.859 Calling clear_vhost_blk_subsystem
00:05:15.859 Calling clear_vhost_scsi_subsystem
00:05:15.859 Calling clear_bdev_subsystem
00:05:15.859 12:13:37 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py
00:05:15.859 12:13:37 json_config -- json_config/json_config.sh@350 -- # count=100
00:05:15.859 12:13:37 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:05:15.859 12:13:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method check_empty
00:05:15.859 12:13:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:15.859 12:13:37 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:05:16.118 12:13:38 json_config -- json_config/json_config.sh@352 -- # break
00:05:16.118 12:13:38 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:05:16.118 12:13:38 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:05:16.118 12:13:38 json_config -- json_config/common.sh@31 -- # local app=target
00:05:16.118 12:13:38 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:16.118 12:13:38 json_config -- json_config/common.sh@35 -- # [[ -n 1436867 ]]
00:05:16.118 12:13:38 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1436867
00:05:16.118 12:13:38 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:16.118 12:13:38 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:16.118 12:13:38 json_config -- json_config/common.sh@41 -- # kill -0 1436867
00:05:16.118 12:13:38 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:05:16.687 12:13:38 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:05:16.687 12:13:38 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:16.687 12:13:38 json_config -- json_config/common.sh@41 -- # kill -0 1436867
00:05:16.687 12:13:38 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:16.687 12:13:38 json_config -- json_config/common.sh@43 -- # break
00:05:16.687 12:13:38 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:16.687 12:13:38 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:05:16.687 SPDK target shutdown done
00:05:16.687 12:13:38 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:05:16.687 INFO: relaunching applications...
00:05:16.687 12:13:38 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json
00:05:16.687 12:13:38 json_config -- json_config/common.sh@9 -- # local app=target
00:05:16.687 12:13:38 json_config -- json_config/common.sh@10 -- # shift
00:05:16.687 12:13:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:16.687 12:13:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:16.687 12:13:38 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:05:16.687 12:13:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:16.687 12:13:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:16.687 12:13:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1438439
00:05:16.687 12:13:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:05:16.687 Waiting for target to run...
00:05:16.687 12:13:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json
00:05:16.687 12:13:38 json_config -- json_config/common.sh@25 -- # waitforlisten 1438439 /var/tmp/spdk_tgt.sock
00:05:16.687 12:13:38 json_config -- common/autotest_common.sh@835 -- # '[' -z 1438439 ']'
00:05:16.687 12:13:38 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:16.687 12:13:38 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:16.687 12:13:38 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:16.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:16.687 12:13:38 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:16.687 12:13:38 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:16.687 [2024-12-10 12:13:38.635052] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization...
00:05:16.687 [2024-12-10 12:13:38.635107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1438439 ]
00:05:16.946 [2024-12-10 12:13:39.084759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:17.205 [2024-12-10 12:13:39.140474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:20.493 [2024-12-10 12:13:42.174965] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:20.493 [2024-12-10 12:13:42.207317] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:05:20.751 12:13:42 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:20.751 12:13:42 json_config -- common/autotest_common.sh@868 -- # return 0
00:05:20.751 12:13:42 json_config -- json_config/common.sh@26 -- # echo ''
00:05:20.751
00:05:20.751 12:13:42 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:05:20.751 12:13:42 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
00:05:20.751 INFO: Checking if target configuration is the same...
00:05:20.751 12:13:42 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:05:20.751 12:13:42 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json
00:05:20.751 12:13:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:20.751 + '[' 2 -ne 2 ']'
00:05:20.751 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_diff.sh
00:05:20.751 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/../..
00:05:20.751 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
00:05:20.751 +++ basename /dev/fd/62
00:05:20.751 ++ mktemp /tmp/62.XXX
00:05:20.751 + tmp_file_1=/tmp/62.qqL
00:05:20.751 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json
00:05:20.751 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:20.752 + tmp_file_2=/tmp/spdk_tgt_config.json.LcE
00:05:20.752 + ret=0
00:05:20.752 + /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method sort
00:05:21.319 + /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method sort
00:05:21.319 + diff -u /tmp/62.qqL /tmp/spdk_tgt_config.json.LcE
00:05:21.319 + echo 'INFO: JSON config files are the same'
00:05:21.319 INFO: JSON config files are the same
00:05:21.319 + rm /tmp/62.qqL /tmp/spdk_tgt_config.json.LcE
00:05:21.319 + exit 0
00:05:21.319 12:13:43 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:05:21.319 12:13:43 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:05:21.319 INFO: changing configuration and checking if this can be detected...
00:05:21.319 12:13:43 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:21.319 12:13:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:21.319 12:13:43 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json
00:05:21.319 12:13:43 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:05:21.319 12:13:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:21.319 + '[' 2 -ne 2 ']'
00:05:21.319 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_diff.sh
00:05:21.319 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/../..
00:05:21.319 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
00:05:21.319 +++ basename /dev/fd/62
00:05:21.319 ++ mktemp /tmp/62.XXX
00:05:21.319 + tmp_file_1=/tmp/62.JUW
00:05:21.319 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json
00:05:21.577 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:21.577 + tmp_file_2=/tmp/spdk_tgt_config.json.iiX
00:05:21.577 + ret=0
00:05:21.577 + /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method sort
00:05:21.836 + /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method sort
00:05:21.836 + diff -u /tmp/62.JUW /tmp/spdk_tgt_config.json.iiX
00:05:21.836 + ret=1
00:05:21.836 + echo '=== Start of file: /tmp/62.JUW ==='
00:05:21.836 + cat /tmp/62.JUW
00:05:21.836 + echo '=== End of file: /tmp/62.JUW ==='
00:05:21.836 + echo ''
00:05:21.836 + echo '=== Start of file: /tmp/spdk_tgt_config.json.iiX ==='
00:05:21.836 + cat /tmp/spdk_tgt_config.json.iiX
00:05:21.836 + echo '=== End of file: /tmp/spdk_tgt_config.json.iiX ==='
00:05:21.836 + echo ''
00:05:21.836 + rm /tmp/62.JUW /tmp/spdk_tgt_config.json.iiX
00:05:21.836 + exit 1
00:05:21.836 12:13:43 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:05:21.836 INFO: configuration change detected.
00:05:21.836 12:13:43 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:05:21.836 12:13:43 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:05:21.836 12:13:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:21.836 12:13:43 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:21.836 12:13:43 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:05:21.836 12:13:43 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:05:21.836 12:13:43 json_config -- json_config/json_config.sh@324 -- # [[ -n 1438439 ]]
00:05:21.836 12:13:43 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:05:21.836 12:13:43 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:05:21.836 12:13:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:21.836 12:13:43 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:21.836 12:13:43 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:05:21.836 12:13:43 json_config -- json_config/json_config.sh@200 -- # uname -s
00:05:21.836 12:13:43 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:05:21.836 12:13:43 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:05:21.836 12:13:43 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:05:21.836 12:13:43 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:05:21.836 12:13:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:21.836 12:13:43 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:21.836 12:13:43 json_config -- json_config/json_config.sh@330 -- # killprocess 1438439
00:05:21.836 12:13:43 json_config -- common/autotest_common.sh@954 -- # '[' -z 1438439 ']'
00:05:21.836 12:13:43 json_config -- common/autotest_common.sh@958 -- # kill -0 1438439
00:05:21.836 12:13:43 json_config -- common/autotest_common.sh@959 -- # uname
00:05:21.836 12:13:43 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:21.836 12:13:43 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1438439
00:05:21.836 12:13:43 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:21.836 12:13:43 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:21.836 12:13:43 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1438439'
00:05:21.836 killing process with pid 1438439
00:05:21.836 12:13:43 json_config -- common/autotest_common.sh@973 -- # kill 1438439
00:05:21.836 12:13:43 json_config -- common/autotest_common.sh@978 -- # wait 1438439
00:05:23.739 12:13:45 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json
00:05:23.739 12:13:45 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:05:23.739 12:13:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:23.739 12:13:45 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:23.739 12:13:45 json_config -- json_config/json_config.sh@335 -- # return 0
00:05:23.739 12:13:45 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:05:23.739 INFO: Success
00:05:23.739
00:05:23.739 real 0m15.919s
00:05:23.739 user 0m16.580s
00:05:23.739 sys 0m2.557s
12:13:45 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:23.739 12:13:45 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:23.739 ************************************
00:05:23.739 END TEST json_config
00:05:23.739 ************************************
00:05:23.739 12:13:45 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_config_extra_key.sh
00:05:23.739 12:13:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:23.739 12:13:45 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:23.739 12:13:45 -- common/autotest_common.sh@10 -- # set +x
00:05:23.739 ************************************
00:05:23.739 START TEST json_config_extra_key
00:05:23.739 ************************************
00:05:23.739 12:13:45 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_config_extra_key.sh
00:05:23.739 12:13:45 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:23.739 12:13:45 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version
00:05:23.739 12:13:45 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:23.739 12:13:45 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:23.739 12:13:45 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:23.739 12:13:45 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:23.739 12:13:45 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:23.739 12:13:45 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:05:23.739 12:13:45 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:05:23.739 12:13:45 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:05:23.739 12:13:45 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:05:23.739 12:13:45 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:05:23.739 12:13:45 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:05:23.739 12:13:45 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:05:23.739 12:13:45 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:23.739 12:13:45 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:05:23.739 12:13:45 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:05:23.739 12:13:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:23.739 12:13:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:23.740 12:13:45 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:05:23.740 12:13:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:05:23.740 12:13:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:23.740 12:13:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:05:23.740 12:13:45 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:05:23.740 12:13:45 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:05:23.740 12:13:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:05:23.740 12:13:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:23.740 12:13:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:05:23.740 12:13:45 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:05:23.740 12:13:45 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:23.740 12:13:45 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:23.740 12:13:45 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:05:23.740 12:13:45 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:23.740 12:13:45 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:23.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.740 --rc genhtml_branch_coverage=1
00:05:23.740 --rc genhtml_function_coverage=1
00:05:23.740 --rc genhtml_legend=1
00:05:23.740 --rc geninfo_all_blocks=1
00:05:23.740 --rc geninfo_unexecuted_blocks=1
00:05:23.740
00:05:23.740 '
00:05:23.740 12:13:45 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:23.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.740 --rc genhtml_branch_coverage=1
00:05:23.740 --rc genhtml_function_coverage=1
00:05:23.740 --rc genhtml_legend=1
00:05:23.740 --rc geninfo_all_blocks=1
00:05:23.740 --rc geninfo_unexecuted_blocks=1
00:05:23.740
00:05:23.740 '
00:05:23.740 12:13:45 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:23.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.740 --rc genhtml_branch_coverage=1
00:05:23.740 --rc genhtml_function_coverage=1
00:05:23.740 --rc genhtml_legend=1
00:05:23.740 --rc geninfo_all_blocks=1
00:05:23.740 --rc geninfo_unexecuted_blocks=1
00:05:23.740
00:05:23.740 '
00:05:23.740 12:13:45 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:23.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:23.740 --rc genhtml_branch_coverage=1
00:05:23.740 --rc genhtml_function_coverage=1
00:05:23.740 --rc genhtml_legend=1
00:05:23.740 --rc geninfo_all_blocks=1
00:05:23.740 --rc geninfo_unexecuted_blocks=1
00:05:23.740
00:05:23.740 '
00:05:23.740 12:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh
00:05:23.740 12:13:45 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:05:23.740 12:13:45 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:23.740 12:13:45 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:23.740 12:13:45 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:23.740 12:13:45 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:23.740 12:13:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:23.740 12:13:45 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:23.740 12:13:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:05:23.740 12:13:45 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:23.740 12:13:45 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:23.740 12:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/common.sh
00:05:23.740 12:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:05:23.740 12:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:05:23.740 12:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:05:23.740 12:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:05:23.740 12:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:05:23.740 12:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:05:23.740 12:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/extra_key.json')
00:05:23.740 12:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:05:23.740 12:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:23.740 12:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:05:23.740 INFO: launching applications...
00:05:23.740 12:13:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/extra_key.json
00:05:23.740 12:13:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:05:23.740 12:13:45 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:05:23.740 12:13:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:23.740 12:13:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:23.740 12:13:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:05:23.740 12:13:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:23.740 12:13:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:23.740 12:13:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1439717
00:05:23.740 12:13:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:05:23.740 Waiting for target to run...
00:05:23.740 12:13:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1439717 /var/tmp/spdk_tgt.sock
00:05:23.740 12:13:45 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1439717 ']'
00:05:23.740 12:13:45 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:23.740 12:13:45 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/extra_key.json
00:05:23.740 12:13:45 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:23.740 12:13:45 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:23.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:23.740 12:13:45 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:23.740 12:13:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:05:23.740 [2024-12-10 12:13:45.826763] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization...
00:05:23.740 [2024-12-10 12:13:45.826809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1439717 ]
00:05:24.308 [2024-12-10 12:13:46.279872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:24.308 [2024-12-10 12:13:46.329017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:24.567 12:13:46 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:24.567 12:13:46 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:05:24.567 12:13:46 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:05:24.567
00:05:24.567 12:13:46 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:05:24.567 INFO: shutting down applications...
00:05:24.567 12:13:46 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:05:24.567 12:13:46 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:05:24.567 12:13:46 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:24.567 12:13:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1439717 ]]
00:05:24.567 12:13:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1439717
00:05:24.567 12:13:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:24.567 12:13:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:24.567 12:13:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1439717
00:05:24.567 12:13:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:25.135 12:13:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:25.135 12:13:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:25.135 12:13:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1439717
00:05:25.135 12:13:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:25.135 12:13:47 json_config_extra_key -- json_config/common.sh@43 -- # break
00:05:25.135 12:13:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:25.135 12:13:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:05:25.135 SPDK target shutdown done
00:05:25.136 12:13:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:05:25.136 Success
00:05:25.136
00:05:25.136 real 0m1.579s
00:05:25.136 user 0m1.195s
00:05:25.136 sys 0m0.574s
00:05:25.136 12:13:47 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:25.136 12:13:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:05:25.136 ************************************
00:05:25.136 END TEST json_config_extra_key
00:05:25.136 ************************************
00:05:25.136 12:13:47 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:25.136 12:13:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:25.136 12:13:47 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:25.136 12:13:47 -- common/autotest_common.sh@10 -- # set +x
00:05:25.136 ************************************
00:05:25.136 START TEST alias_rpc
00:05:25.136 ************************************
00:05:25.136 12:13:47 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:25.395 * Looking for test storage...
00:05:25.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/alias_rpc
00:05:25.395 12:13:47 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:25.395 12:13:47 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:05:25.395 12:13:47 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:25.395 12:13:47 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@345 -- # : 1
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:25.395 12:13:47 alias_rpc -- scripts/common.sh@368 -- # return 0
00:05:25.395 12:13:47 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:25.395 12:13:47 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:25.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:25.395 --rc genhtml_branch_coverage=1
00:05:25.395 --rc genhtml_function_coverage=1
00:05:25.395 --rc genhtml_legend=1
00:05:25.395 --rc geninfo_all_blocks=1
00:05:25.395 --rc geninfo_unexecuted_blocks=1
00:05:25.395
00:05:25.395 '
00:05:25.395 12:13:47 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:25.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:25.395 --rc genhtml_branch_coverage=1
00:05:25.395 --rc genhtml_function_coverage=1
00:05:25.395 --rc genhtml_legend=1
00:05:25.395 --rc geninfo_all_blocks=1
00:05:25.395 --rc geninfo_unexecuted_blocks=1
00:05:25.395
00:05:25.395 '
00:05:25.395 12:13:47 alias_rpc -- common/autotest_common.sh@1725 --
# export 'LCOV=lcov 00:05:25.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.395 --rc genhtml_branch_coverage=1 00:05:25.395 --rc genhtml_function_coverage=1 00:05:25.395 --rc genhtml_legend=1 00:05:25.395 --rc geninfo_all_blocks=1 00:05:25.395 --rc geninfo_unexecuted_blocks=1 00:05:25.395 00:05:25.395 ' 00:05:25.395 12:13:47 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:25.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.395 --rc genhtml_branch_coverage=1 00:05:25.395 --rc genhtml_function_coverage=1 00:05:25.395 --rc genhtml_legend=1 00:05:25.395 --rc geninfo_all_blocks=1 00:05:25.395 --rc geninfo_unexecuted_blocks=1 00:05:25.395 00:05:25.395 ' 00:05:25.395 12:13:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:25.395 12:13:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1440051 00:05:25.395 12:13:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:05:25.395 12:13:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1440051 00:05:25.395 12:13:47 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1440051 ']' 00:05:25.395 12:13:47 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.395 12:13:47 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.395 12:13:47 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.395 12:13:47 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.395 12:13:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.395 [2024-12-10 12:13:47.468035] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:05:25.395 [2024-12-10 12:13:47.468087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1440051 ] 00:05:25.395 [2024-12-10 12:13:47.543569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.654 [2024-12-10 12:13:47.585938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.654 12:13:47 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.654 12:13:47 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:25.654 12:13:47 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py load_config -i 00:05:25.913 12:13:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1440051 00:05:25.913 12:13:48 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1440051 ']' 00:05:25.913 12:13:48 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1440051 00:05:25.913 12:13:48 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:25.913 12:13:48 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.913 12:13:48 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1440051 00:05:25.913 12:13:48 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.913 12:13:48 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.913 12:13:48 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1440051' 00:05:25.913 killing process with pid 1440051 00:05:25.913 12:13:48 alias_rpc -- common/autotest_common.sh@973 -- # kill 1440051 00:05:25.913 12:13:48 alias_rpc -- common/autotest_common.sh@978 -- # wait 1440051 00:05:26.481 00:05:26.481 real 0m1.138s 00:05:26.481 user 0m1.151s 00:05:26.481 sys 0m0.423s 00:05:26.481 12:13:48 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.481 12:13:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.481 ************************************ 00:05:26.481 END TEST alias_rpc 00:05:26.481 ************************************ 00:05:26.482 12:13:48 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:26.482 12:13:48 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/tcp.sh 00:05:26.482 12:13:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.482 12:13:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.482 12:13:48 -- common/autotest_common.sh@10 -- # set +x 00:05:26.482 ************************************ 00:05:26.482 START TEST spdkcli_tcp 00:05:26.482 ************************************ 00:05:26.482 12:13:48 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/tcp.sh 00:05:26.482 * Looking for test storage... 
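The `lt 1.15 2` / `cmp_versions` trace that repeats before each test above compares two dotted version strings field by field to decide whether the installed lcov is older than 2.x. A simplified sketch of that comparison — the real scripts/common.sh also splits on `-` and `:` and handles more operators, so this `lt` helper is an illustrative reduction:

```shell
#!/usr/bin/env bash
# Sketch of the scripts/common.sh cmp_versions logic traced above:
# split both versions on '.', then compare numerically field by field,
# treating missing fields as 0 (so 1.15 vs 2 compares as 1.15 vs 2.0).
lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i a b
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0    # first differing field decides
        (( a > b )) && return 1
    done
    return 1                       # all fields equal -> not less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

Field-wise numeric comparison is what makes `2.39.2 < 2.40` come out true, where a plain string comparison would get it wrong.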
00:05:26.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli 00:05:26.482 12:13:48 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:26.482 12:13:48 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:26.482 12:13:48 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:26.482 12:13:48 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.482 12:13:48 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:26.482 12:13:48 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.482 12:13:48 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:26.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.482 --rc genhtml_branch_coverage=1 00:05:26.482 --rc genhtml_function_coverage=1 00:05:26.482 --rc genhtml_legend=1 00:05:26.482 --rc geninfo_all_blocks=1 00:05:26.482 --rc geninfo_unexecuted_blocks=1 00:05:26.482 00:05:26.482 ' 00:05:26.482 12:13:48 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:26.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.482 --rc genhtml_branch_coverage=1 00:05:26.482 --rc genhtml_function_coverage=1 00:05:26.482 --rc genhtml_legend=1 00:05:26.482 --rc geninfo_all_blocks=1 00:05:26.482 --rc geninfo_unexecuted_blocks=1 00:05:26.482 00:05:26.482 ' 00:05:26.482 12:13:48 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:26.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.482 --rc genhtml_branch_coverage=1 00:05:26.482 --rc genhtml_function_coverage=1 00:05:26.482 --rc genhtml_legend=1 00:05:26.482 --rc geninfo_all_blocks=1 00:05:26.482 --rc geninfo_unexecuted_blocks=1 00:05:26.482 00:05:26.482 ' 00:05:26.482 12:13:48 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:26.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.482 --rc genhtml_branch_coverage=1 00:05:26.482 --rc genhtml_function_coverage=1 00:05:26.482 --rc genhtml_legend=1 00:05:26.482 --rc geninfo_all_blocks=1 00:05:26.482 --rc geninfo_unexecuted_blocks=1 00:05:26.482 00:05:26.482 ' 00:05:26.482 12:13:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/common.sh 00:05:26.482 12:13:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/spdkcli_job.py 00:05:26.482 12:13:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/clear_config.py 00:05:26.482 12:13:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:26.482 12:13:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:26.482 12:13:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:26.482 12:13:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:26.482 12:13:48 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.482 12:13:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.482 12:13:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1440303 00:05:26.482 12:13:48 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1440303 00:05:26.482 12:13:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:26.482 12:13:48 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1440303 ']' 00:05:26.482 12:13:48 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.482 12:13:48 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.482 12:13:48 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.482 12:13:48 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.482 12:13:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.742 [2024-12-10 12:13:48.676990] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:05:26.742 [2024-12-10 12:13:48.677039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1440303 ] 00:05:26.742 [2024-12-10 12:13:48.751792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.742 [2024-12-10 12:13:48.794403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.742 [2024-12-10 12:13:48.794405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.000 12:13:49 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.000 12:13:49 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:27.000 12:13:49 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1440486 00:05:27.000 12:13:49 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:27.000 12:13:49 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:27.260 [ 00:05:27.260 "bdev_malloc_delete", 00:05:27.260 "bdev_malloc_create", 00:05:27.260 "bdev_null_resize", 00:05:27.260 "bdev_null_delete", 00:05:27.260 "bdev_null_create", 00:05:27.260 "bdev_nvme_cuse_unregister", 00:05:27.260 "bdev_nvme_cuse_register", 00:05:27.260 "bdev_opal_new_user", 00:05:27.260 "bdev_opal_set_lock_state", 00:05:27.260 "bdev_opal_delete", 00:05:27.260 "bdev_opal_get_info", 00:05:27.260 "bdev_opal_create", 00:05:27.260 "bdev_nvme_opal_revert", 00:05:27.260 "bdev_nvme_opal_init", 00:05:27.260 "bdev_nvme_send_cmd", 00:05:27.260 "bdev_nvme_set_keys", 00:05:27.260 "bdev_nvme_get_path_iostat", 00:05:27.260 "bdev_nvme_get_mdns_discovery_info", 00:05:27.260 "bdev_nvme_stop_mdns_discovery", 00:05:27.260 "bdev_nvme_start_mdns_discovery", 00:05:27.260 "bdev_nvme_set_multipath_policy", 00:05:27.260 "bdev_nvme_set_preferred_path", 00:05:27.260 "bdev_nvme_get_io_paths", 00:05:27.260 "bdev_nvme_remove_error_injection", 00:05:27.260 "bdev_nvme_add_error_injection", 00:05:27.260 "bdev_nvme_get_discovery_info", 00:05:27.260 "bdev_nvme_stop_discovery", 00:05:27.260 "bdev_nvme_start_discovery", 00:05:27.260 "bdev_nvme_get_controller_health_info", 00:05:27.260 "bdev_nvme_disable_controller", 00:05:27.260 "bdev_nvme_enable_controller", 00:05:27.260 "bdev_nvme_reset_controller", 00:05:27.260 "bdev_nvme_get_transport_statistics", 00:05:27.260 "bdev_nvme_apply_firmware", 00:05:27.260 "bdev_nvme_detach_controller", 00:05:27.260 "bdev_nvme_get_controllers", 00:05:27.260 "bdev_nvme_attach_controller", 00:05:27.260 "bdev_nvme_set_hotplug", 00:05:27.260 "bdev_nvme_set_options", 00:05:27.260 "bdev_passthru_delete", 00:05:27.260 "bdev_passthru_create", 00:05:27.260 "bdev_lvol_set_parent_bdev", 00:05:27.260 "bdev_lvol_set_parent", 00:05:27.260 "bdev_lvol_check_shallow_copy", 00:05:27.260 "bdev_lvol_start_shallow_copy", 00:05:27.260 "bdev_lvol_grow_lvstore", 00:05:27.260 
"bdev_lvol_get_lvols", 00:05:27.260 "bdev_lvol_get_lvstores", 00:05:27.260 "bdev_lvol_delete", 00:05:27.260 "bdev_lvol_set_read_only", 00:05:27.260 "bdev_lvol_resize", 00:05:27.260 "bdev_lvol_decouple_parent", 00:05:27.260 "bdev_lvol_inflate", 00:05:27.260 "bdev_lvol_rename", 00:05:27.260 "bdev_lvol_clone_bdev", 00:05:27.260 "bdev_lvol_clone", 00:05:27.260 "bdev_lvol_snapshot", 00:05:27.260 "bdev_lvol_create", 00:05:27.260 "bdev_lvol_delete_lvstore", 00:05:27.260 "bdev_lvol_rename_lvstore", 00:05:27.260 "bdev_lvol_create_lvstore", 00:05:27.260 "bdev_raid_set_options", 00:05:27.260 "bdev_raid_remove_base_bdev", 00:05:27.260 "bdev_raid_add_base_bdev", 00:05:27.260 "bdev_raid_delete", 00:05:27.260 "bdev_raid_create", 00:05:27.260 "bdev_raid_get_bdevs", 00:05:27.260 "bdev_error_inject_error", 00:05:27.260 "bdev_error_delete", 00:05:27.260 "bdev_error_create", 00:05:27.260 "bdev_split_delete", 00:05:27.260 "bdev_split_create", 00:05:27.260 "bdev_delay_delete", 00:05:27.260 "bdev_delay_create", 00:05:27.260 "bdev_delay_update_latency", 00:05:27.260 "bdev_zone_block_delete", 00:05:27.260 "bdev_zone_block_create", 00:05:27.260 "blobfs_create", 00:05:27.260 "blobfs_detect", 00:05:27.260 "blobfs_set_cache_size", 00:05:27.260 "bdev_aio_delete", 00:05:27.260 "bdev_aio_rescan", 00:05:27.260 "bdev_aio_create", 00:05:27.260 "bdev_ftl_set_property", 00:05:27.260 "bdev_ftl_get_properties", 00:05:27.260 "bdev_ftl_get_stats", 00:05:27.260 "bdev_ftl_unmap", 00:05:27.260 "bdev_ftl_unload", 00:05:27.260 "bdev_ftl_delete", 00:05:27.260 "bdev_ftl_load", 00:05:27.260 "bdev_ftl_create", 00:05:27.260 "bdev_virtio_attach_controller", 00:05:27.260 "bdev_virtio_scsi_get_devices", 00:05:27.260 "bdev_virtio_detach_controller", 00:05:27.260 "bdev_virtio_blk_set_hotplug", 00:05:27.260 "bdev_iscsi_delete", 00:05:27.260 "bdev_iscsi_create", 00:05:27.260 "bdev_iscsi_set_options", 00:05:27.260 "accel_error_inject_error", 00:05:27.260 "ioat_scan_accel_module", 00:05:27.260 "dsa_scan_accel_module", 
00:05:27.260 "iaa_scan_accel_module", 00:05:27.260 "vfu_virtio_create_fs_endpoint", 00:05:27.260 "vfu_virtio_create_scsi_endpoint", 00:05:27.260 "vfu_virtio_scsi_remove_target", 00:05:27.260 "vfu_virtio_scsi_add_target", 00:05:27.260 "vfu_virtio_create_blk_endpoint", 00:05:27.260 "vfu_virtio_delete_endpoint", 00:05:27.260 "keyring_file_remove_key", 00:05:27.260 "keyring_file_add_key", 00:05:27.260 "keyring_linux_set_options", 00:05:27.260 "fsdev_aio_delete", 00:05:27.260 "fsdev_aio_create", 00:05:27.260 "iscsi_get_histogram", 00:05:27.260 "iscsi_enable_histogram", 00:05:27.260 "iscsi_set_options", 00:05:27.260 "iscsi_get_auth_groups", 00:05:27.260 "iscsi_auth_group_remove_secret", 00:05:27.260 "iscsi_auth_group_add_secret", 00:05:27.260 "iscsi_delete_auth_group", 00:05:27.260 "iscsi_create_auth_group", 00:05:27.260 "iscsi_set_discovery_auth", 00:05:27.260 "iscsi_get_options", 00:05:27.260 "iscsi_target_node_request_logout", 00:05:27.260 "iscsi_target_node_set_redirect", 00:05:27.260 "iscsi_target_node_set_auth", 00:05:27.260 "iscsi_target_node_add_lun", 00:05:27.260 "iscsi_get_stats", 00:05:27.260 "iscsi_get_connections", 00:05:27.260 "iscsi_portal_group_set_auth", 00:05:27.260 "iscsi_start_portal_group", 00:05:27.260 "iscsi_delete_portal_group", 00:05:27.260 "iscsi_create_portal_group", 00:05:27.260 "iscsi_get_portal_groups", 00:05:27.260 "iscsi_delete_target_node", 00:05:27.260 "iscsi_target_node_remove_pg_ig_maps", 00:05:27.260 "iscsi_target_node_add_pg_ig_maps", 00:05:27.260 "iscsi_create_target_node", 00:05:27.260 "iscsi_get_target_nodes", 00:05:27.260 "iscsi_delete_initiator_group", 00:05:27.260 "iscsi_initiator_group_remove_initiators", 00:05:27.260 "iscsi_initiator_group_add_initiators", 00:05:27.260 "iscsi_create_initiator_group", 00:05:27.260 "iscsi_get_initiator_groups", 00:05:27.260 "nvmf_set_crdt", 00:05:27.260 "nvmf_set_config", 00:05:27.260 "nvmf_set_max_subsystems", 00:05:27.260 "nvmf_stop_mdns_prr", 00:05:27.260 "nvmf_publish_mdns_prr", 
00:05:27.260 "nvmf_subsystem_get_listeners", 00:05:27.260 "nvmf_subsystem_get_qpairs", 00:05:27.260 "nvmf_subsystem_get_controllers", 00:05:27.260 "nvmf_get_stats", 00:05:27.260 "nvmf_get_transports", 00:05:27.260 "nvmf_create_transport", 00:05:27.260 "nvmf_get_targets", 00:05:27.260 "nvmf_delete_target", 00:05:27.260 "nvmf_create_target", 00:05:27.260 "nvmf_subsystem_allow_any_host", 00:05:27.260 "nvmf_subsystem_set_keys", 00:05:27.260 "nvmf_subsystem_remove_host", 00:05:27.260 "nvmf_subsystem_add_host", 00:05:27.260 "nvmf_ns_remove_host", 00:05:27.260 "nvmf_ns_add_host", 00:05:27.260 "nvmf_subsystem_remove_ns", 00:05:27.260 "nvmf_subsystem_set_ns_ana_group", 00:05:27.260 "nvmf_subsystem_add_ns", 00:05:27.260 "nvmf_subsystem_listener_set_ana_state", 00:05:27.260 "nvmf_discovery_get_referrals", 00:05:27.260 "nvmf_discovery_remove_referral", 00:05:27.260 "nvmf_discovery_add_referral", 00:05:27.260 "nvmf_subsystem_remove_listener", 00:05:27.260 "nvmf_subsystem_add_listener", 00:05:27.260 "nvmf_delete_subsystem", 00:05:27.260 "nvmf_create_subsystem", 00:05:27.260 "nvmf_get_subsystems", 00:05:27.260 "env_dpdk_get_mem_stats", 00:05:27.260 "nbd_get_disks", 00:05:27.260 "nbd_stop_disk", 00:05:27.260 "nbd_start_disk", 00:05:27.260 "ublk_recover_disk", 00:05:27.260 "ublk_get_disks", 00:05:27.260 "ublk_stop_disk", 00:05:27.260 "ublk_start_disk", 00:05:27.260 "ublk_destroy_target", 00:05:27.260 "ublk_create_target", 00:05:27.260 "virtio_blk_create_transport", 00:05:27.260 "virtio_blk_get_transports", 00:05:27.260 "vhost_controller_set_coalescing", 00:05:27.260 "vhost_get_controllers", 00:05:27.260 "vhost_delete_controller", 00:05:27.260 "vhost_create_blk_controller", 00:05:27.260 "vhost_scsi_controller_remove_target", 00:05:27.260 "vhost_scsi_controller_add_target", 00:05:27.260 "vhost_start_scsi_controller", 00:05:27.260 "vhost_create_scsi_controller", 00:05:27.260 "thread_set_cpumask", 00:05:27.260 "scheduler_set_options", 00:05:27.260 "framework_get_governor", 00:05:27.260 
"framework_get_scheduler", 00:05:27.260 "framework_set_scheduler", 00:05:27.260 "framework_get_reactors", 00:05:27.260 "thread_get_io_channels", 00:05:27.260 "thread_get_pollers", 00:05:27.260 "thread_get_stats", 00:05:27.260 "framework_monitor_context_switch", 00:05:27.260 "spdk_kill_instance", 00:05:27.260 "log_enable_timestamps", 00:05:27.260 "log_get_flags", 00:05:27.260 "log_clear_flag", 00:05:27.260 "log_set_flag", 00:05:27.260 "log_get_level", 00:05:27.260 "log_set_level", 00:05:27.260 "log_get_print_level", 00:05:27.260 "log_set_print_level", 00:05:27.260 "framework_enable_cpumask_locks", 00:05:27.260 "framework_disable_cpumask_locks", 00:05:27.260 "framework_wait_init", 00:05:27.260 "framework_start_init", 00:05:27.260 "scsi_get_devices", 00:05:27.260 "bdev_get_histogram", 00:05:27.260 "bdev_enable_histogram", 00:05:27.260 "bdev_set_qos_limit", 00:05:27.260 "bdev_set_qd_sampling_period", 00:05:27.260 "bdev_get_bdevs", 00:05:27.260 "bdev_reset_iostat", 00:05:27.260 "bdev_get_iostat", 00:05:27.260 "bdev_examine", 00:05:27.260 "bdev_wait_for_examine", 00:05:27.260 "bdev_set_options", 00:05:27.260 "accel_get_stats", 00:05:27.260 "accel_set_options", 00:05:27.260 "accel_set_driver", 00:05:27.260 "accel_crypto_key_destroy", 00:05:27.260 "accel_crypto_keys_get", 00:05:27.260 "accel_crypto_key_create", 00:05:27.260 "accel_assign_opc", 00:05:27.260 "accel_get_module_info", 00:05:27.260 "accel_get_opc_assignments", 00:05:27.260 "vmd_rescan", 00:05:27.260 "vmd_remove_device", 00:05:27.260 "vmd_enable", 00:05:27.260 "sock_get_default_impl", 00:05:27.260 "sock_set_default_impl", 00:05:27.261 "sock_impl_set_options", 00:05:27.261 "sock_impl_get_options", 00:05:27.261 "iobuf_get_stats", 00:05:27.261 "iobuf_set_options", 00:05:27.261 "keyring_get_keys", 00:05:27.261 "vfu_tgt_set_base_path", 00:05:27.261 "framework_get_pci_devices", 00:05:27.261 "framework_get_config", 00:05:27.261 "framework_get_subsystems", 00:05:27.261 "fsdev_set_opts", 00:05:27.261 "fsdev_get_opts", 
00:05:27.261 "trace_get_info", 00:05:27.261 "trace_get_tpoint_group_mask", 00:05:27.261 "trace_disable_tpoint_group", 00:05:27.261 "trace_enable_tpoint_group", 00:05:27.261 "trace_clear_tpoint_mask", 00:05:27.261 "trace_set_tpoint_mask", 00:05:27.261 "notify_get_notifications", 00:05:27.261 "notify_get_types", 00:05:27.261 "spdk_get_version", 00:05:27.261 "rpc_get_methods" 00:05:27.261 ] 00:05:27.261 12:13:49 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:27.261 12:13:49 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:27.261 12:13:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:27.261 12:13:49 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:27.261 12:13:49 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1440303 00:05:27.261 12:13:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1440303 ']' 00:05:27.261 12:13:49 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1440303 00:05:27.261 12:13:49 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:27.261 12:13:49 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.261 12:13:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1440303 00:05:27.261 12:13:49 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.261 12:13:49 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.261 12:13:49 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1440303' 00:05:27.261 killing process with pid 1440303 00:05:27.261 12:13:49 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1440303 00:05:27.261 12:13:49 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1440303 00:05:27.520 00:05:27.520 real 0m1.154s 00:05:27.520 user 0m1.950s 00:05:27.520 sys 0m0.436s 00:05:27.520 12:13:49 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.520 12:13:49 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:27.520 ************************************ 00:05:27.520 END TEST spdkcli_tcp 00:05:27.520 ************************************ 00:05:27.520 12:13:49 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:27.520 12:13:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.520 12:13:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.520 12:13:49 -- common/autotest_common.sh@10 -- # set +x 00:05:27.520 ************************************ 00:05:27.520 START TEST dpdk_mem_utility 00:05:27.520 ************************************ 00:05:27.520 12:13:49 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:27.780 * Looking for test storage... 00:05:27.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/dpdk_memory_utility 00:05:27.780 12:13:49 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:27.780 12:13:49 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:27.780 12:13:49 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:27.780 12:13:49 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@337 -- 
# read -ra ver2 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.780 12:13:49 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:27.780 12:13:49 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.780 12:13:49 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:05:27.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.780 --rc genhtml_branch_coverage=1 00:05:27.780 --rc genhtml_function_coverage=1 00:05:27.780 --rc genhtml_legend=1 00:05:27.780 --rc geninfo_all_blocks=1 00:05:27.780 --rc geninfo_unexecuted_blocks=1 00:05:27.780 00:05:27.780 ' 00:05:27.780 12:13:49 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:27.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.780 --rc genhtml_branch_coverage=1 00:05:27.780 --rc genhtml_function_coverage=1 00:05:27.780 --rc genhtml_legend=1 00:05:27.780 --rc geninfo_all_blocks=1 00:05:27.780 --rc geninfo_unexecuted_blocks=1 00:05:27.780 00:05:27.780 ' 00:05:27.780 12:13:49 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:27.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.780 --rc genhtml_branch_coverage=1 00:05:27.780 --rc genhtml_function_coverage=1 00:05:27.780 --rc genhtml_legend=1 00:05:27.780 --rc geninfo_all_blocks=1 00:05:27.780 --rc geninfo_unexecuted_blocks=1 00:05:27.780 00:05:27.780 ' 00:05:27.780 12:13:49 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:27.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.780 --rc genhtml_branch_coverage=1 00:05:27.780 --rc genhtml_function_coverage=1 00:05:27.780 --rc genhtml_legend=1 00:05:27.780 --rc geninfo_all_blocks=1 00:05:27.780 --rc geninfo_unexecuted_blocks=1 00:05:27.780 00:05:27.780 ' 00:05:27.780 12:13:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/dpdk_mem_info.py 00:05:27.780 12:13:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1440605 00:05:27.780 12:13:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1440605 00:05:27.780 12:13:49 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:05:27.780 12:13:49 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1440605 ']' 00:05:27.780 12:13:49 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.780 12:13:49 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.780 12:13:49 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.780 12:13:49 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.780 12:13:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:27.780 [2024-12-10 12:13:49.885567] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:05:27.780 [2024-12-10 12:13:49.885613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1440605 ] 00:05:28.039 [2024-12-10 12:13:49.961215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.039 [2024-12-10 12:13:50.000094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.299 12:13:50 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.299 12:13:50 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:28.299 12:13:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:28.299 12:13:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:28.299 12:13:50 dpdk_mem_utility -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.299 12:13:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:28.299 {
00:05:28.299 "filename": "/tmp/spdk_mem_dump.txt"
00:05:28.299 }
00:05:28.299 12:13:50 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:28.299 12:13:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/dpdk_mem_info.py
00:05:28.299 DPDK memory size 818.000000 MiB in 1 heap(s)
00:05:28.299 1 heaps totaling size 818.000000 MiB
00:05:28.299 size: 818.000000 MiB heap id: 0
00:05:28.299 end heaps----------
00:05:28.299 9 mempools totaling size 603.782043 MiB
00:05:28.299 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:05:28.299 size: 158.602051 MiB name: PDU_data_out_Pool
00:05:28.299 size: 100.555481 MiB name: bdev_io_1440605
00:05:28.299 size: 50.003479 MiB name: msgpool_1440605
00:05:28.299 size: 36.509338 MiB name: fsdev_io_1440605
00:05:28.299 size: 21.763794 MiB name: PDU_Pool
00:05:28.299 size: 19.513306 MiB name: SCSI_TASK_Pool
00:05:28.299 size: 4.133484 MiB name: evtpool_1440605
00:05:28.299 size: 0.026123 MiB name: Session_Pool
00:05:28.299 end mempools-------
00:05:28.299 6 memzones totaling size 4.142822 MiB
00:05:28.299 size: 1.000366 MiB name: RG_ring_0_1440605
00:05:28.299 size: 1.000366 MiB name: RG_ring_1_1440605
00:05:28.299 size: 1.000366 MiB name: RG_ring_4_1440605
00:05:28.299 size: 1.000366 MiB name: RG_ring_5_1440605
00:05:28.299 size: 0.125366 MiB name: RG_ring_2_1440605
00:05:28.299 size: 0.015991 MiB name: RG_ring_3_1440605
00:05:28.299 end memzones-------
00:05:28.299 12:13:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/dpdk_mem_info.py -m 0
00:05:28.299 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15
00:05:28.299 list of free elements.
size: 10.852478 MiB
00:05:28.299 element at address: 0x200019200000 with size: 0.999878 MiB
00:05:28.299 element at address: 0x200019400000 with size: 0.999878 MiB
00:05:28.299 element at address: 0x200000400000 with size: 0.998535 MiB
00:05:28.300 element at address: 0x200032000000 with size: 0.994446 MiB
00:05:28.300 element at address: 0x200006400000 with size: 0.959839 MiB
00:05:28.300 element at address: 0x200012c00000 with size: 0.944275 MiB
00:05:28.300 element at address: 0x200019600000 with size: 0.936584 MiB
00:05:28.300 element at address: 0x200000200000 with size: 0.717346 MiB
00:05:28.300 element at address: 0x20001ae00000 with size: 0.582886 MiB
00:05:28.300 element at address: 0x200000c00000 with size: 0.495422 MiB
00:05:28.300 element at address: 0x20000a600000 with size: 0.490723 MiB
00:05:28.300 element at address: 0x200019800000 with size: 0.485657 MiB
00:05:28.300 element at address: 0x200003e00000 with size: 0.481934 MiB
00:05:28.300 element at address: 0x200028200000 with size: 0.410034 MiB
00:05:28.300 element at address: 0x200000800000 with size: 0.355042 MiB
00:05:28.300 list of standard malloc elements.
size: 199.218628 MiB
00:05:28.300 element at address: 0x20000a7fff80 with size: 132.000122 MiB
00:05:28.300 element at address: 0x2000065fff80 with size: 64.000122 MiB
00:05:28.300 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:05:28.300 element at address: 0x2000194fff80 with size: 1.000122 MiB
00:05:28.300 element at address: 0x2000196fff80 with size: 1.000122 MiB
00:05:28.300 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:05:28.300 element at address: 0x2000196eff00 with size: 0.062622 MiB
00:05:28.300 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:05:28.300 element at address: 0x2000196efdc0 with size: 0.000305 MiB
00:05:28.300 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:05:28.300 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:05:28.300 element at address: 0x2000004ffa00 with size: 0.000183 MiB
00:05:28.300 element at address: 0x2000004ffac0 with size: 0.000183 MiB
00:05:28.300 element at address: 0x2000004ffb80 with size: 0.000183 MiB
00:05:28.300 element at address: 0x2000004ffd80 with size: 0.000183 MiB
00:05:28.300 element at address: 0x2000004ffe40 with size: 0.000183 MiB
00:05:28.300 element at address: 0x20000085ae40 with size: 0.000183 MiB
00:05:28.300 element at address: 0x20000085b040 with size: 0.000183 MiB
00:05:28.300 element at address: 0x20000085f300 with size: 0.000183 MiB
00:05:28.300 element at address: 0x20000087f5c0 with size: 0.000183 MiB
00:05:28.300 element at address: 0x20000087f680 with size: 0.000183 MiB
00:05:28.300 element at address: 0x2000008ff940 with size: 0.000183 MiB
00:05:28.300 element at address: 0x2000008ffb40 with size: 0.000183 MiB
00:05:28.300 element at address: 0x200000c7ed40 with size: 0.000183 MiB
00:05:28.300 element at address: 0x200000cff000 with size: 0.000183 MiB
00:05:28.300 element at address: 0x200000cff0c0 with size: 0.000183 MiB
00:05:28.300 element at address: 0x200003e7b600 with size: 0.000183 MiB
00:05:28.300 element at address: 0x200003e7b6c0 with size: 0.000183 MiB
00:05:28.300 element at address: 0x200003efb980 with size: 0.000183 MiB
00:05:28.300 element at address: 0x2000064fdd80 with size: 0.000183 MiB
00:05:28.300 element at address: 0x20000a67da00 with size: 0.000183 MiB
00:05:28.300 element at address: 0x20000a67dac0 with size: 0.000183 MiB
00:05:28.300 element at address: 0x20000a6fdd80 with size: 0.000183 MiB
00:05:28.300 element at address: 0x200012cf1bc0 with size: 0.000183 MiB
00:05:28.300 element at address: 0x2000196efc40 with size: 0.000183 MiB
00:05:28.300 element at address: 0x2000196efd00 with size: 0.000183 MiB
00:05:28.300 element at address: 0x2000198bc740 with size: 0.000183 MiB
00:05:28.300 element at address: 0x20001ae95380 with size: 0.000183 MiB
00:05:28.300 element at address: 0x20001ae95440 with size: 0.000183 MiB
00:05:28.300 element at address: 0x200028268f80 with size: 0.000183 MiB
00:05:28.300 element at address: 0x200028269040 with size: 0.000183 MiB
00:05:28.300 element at address: 0x20002826fc40 with size: 0.000183 MiB
00:05:28.300 element at address: 0x20002826fe40 with size: 0.000183 MiB
00:05:28.300 element at address: 0x20002826ff00 with size: 0.000183 MiB
00:05:28.300 list of memzone associated elements.
size: 607.928894 MiB
00:05:28.300 element at address: 0x20001ae95500 with size: 211.416748 MiB
00:05:28.300 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:28.300 element at address: 0x20002826ffc0 with size: 157.562561 MiB
00:05:28.300 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:28.300 element at address: 0x200012df1e80 with size: 100.055054 MiB
00:05:28.300 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1440605_0
00:05:28.300 element at address: 0x200000dff380 with size: 48.003052 MiB
00:05:28.300 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1440605_0
00:05:28.300 element at address: 0x200003ffdb80 with size: 36.008911 MiB
00:05:28.300 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1440605_0
00:05:28.300 element at address: 0x2000199be940 with size: 20.255554 MiB
00:05:28.300 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:28.300 element at address: 0x2000321feb40 with size: 18.005066 MiB
00:05:28.300 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:28.300 element at address: 0x2000004fff00 with size: 3.000244 MiB
00:05:28.300 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1440605_0
00:05:28.300 element at address: 0x2000009ffe00 with size: 2.000488 MiB
00:05:28.300 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1440605
00:05:28.300 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:05:28.300 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1440605
00:05:28.300 element at address: 0x20000a6fde40 with size: 1.008118 MiB
00:05:28.300 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:28.300 element at address: 0x2000198bc800 with size: 1.008118 MiB
00:05:28.300 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:28.300 element at address: 0x2000064fde40 with size: 1.008118 MiB
00:05:28.300 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:28.300 element at address: 0x200003efba40 with size: 1.008118 MiB
00:05:28.300 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:28.300 element at address: 0x200000cff180 with size: 1.000488 MiB
00:05:28.300 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1440605
00:05:28.300 element at address: 0x2000008ffc00 with size: 1.000488 MiB
00:05:28.300 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1440605
00:05:28.300 element at address: 0x200012cf1c80 with size: 1.000488 MiB
00:05:28.300 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1440605
00:05:28.300 element at address: 0x2000320fe940 with size: 1.000488 MiB
00:05:28.300 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1440605
00:05:28.300 element at address: 0x20000087f740 with size: 0.500488 MiB
00:05:28.300 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1440605
00:05:28.300 element at address: 0x200000c7ee00 with size: 0.500488 MiB
00:05:28.300 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1440605
00:05:28.300 element at address: 0x20000a67db80 with size: 0.500488 MiB
00:05:28.300 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:28.300 element at address: 0x200003e7b780 with size: 0.500488 MiB
00:05:28.300 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:28.300 element at address: 0x20001987c540 with size: 0.250488 MiB
00:05:28.300 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:28.300 element at address: 0x2000002b7a40 with size: 0.125488 MiB
00:05:28.300 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1440605
00:05:28.300 element at address: 0x20000085f3c0 with size: 0.125488 MiB
00:05:28.300 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1440605
00:05:28.300 element at address: 0x2000064f5b80 with size: 0.031738 MiB
00:05:28.300 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:28.300 element at address: 0x200028269100 with size: 0.023743 MiB
00:05:28.300 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:28.300 element at address: 0x20000085b100 with size: 0.016113 MiB
00:05:28.300 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1440605
00:05:28.300 element at address: 0x20002826f240 with size: 0.002441 MiB
00:05:28.300 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:28.300 element at address: 0x2000004ffc40 with size: 0.000305 MiB
00:05:28.300 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1440605
00:05:28.300 element at address: 0x2000008ffa00 with size: 0.000305 MiB
00:05:28.300 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1440605
00:05:28.300 element at address: 0x20000085af00 with size: 0.000305 MiB
00:05:28.300 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1440605
00:05:28.300 element at address: 0x20002826fd00 with size: 0.000305 MiB
00:05:28.300 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:28.300 12:13:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:28.300 12:13:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1440605
00:05:28.300 12:13:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1440605 ']'
00:05:28.300 12:13:50 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1440605
00:05:28.300 12:13:50 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:28.300 12:13:50 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:28.300 12:13:50 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1440605
00:05:28.300 12:13:50 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:28.300 12:13:50 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:28.300 12:13:50 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1440605'
00:05:28.300 killing process with pid 1440605
00:05:28.300 12:13:50 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1440605
00:05:28.300 12:13:50 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1440605
00:05:28.560
00:05:28.560 real 0m1.041s
00:05:28.560 user 0m0.984s
00:05:28.560 sys 0m0.407s
00:05:28.560 12:13:50 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:28.560 12:13:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:28.560 ************************************
00:05:28.560 END TEST dpdk_mem_utility
00:05:28.560 ************************************
00:05:28.819 12:13:50 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/event.sh
00:05:28.819 12:13:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:28.819 12:13:50 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:28.819 12:13:50 -- common/autotest_common.sh@10 -- # set +x
00:05:28.819 ************************************
00:05:28.819 START TEST event
00:05:28.819 ************************************
00:05:28.819 12:13:50 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/event.sh
00:05:28.819 * Looking for test storage...
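[editor's note] The recurring scripts/common.sh traces in this log (IFS=.-:, read -ra ver1/ver2, then a per-component (( ver1[v] < ver2[v] )) loop ending in return 0) are a dotted-version comparison: each test checks whether the installed lcov (1.15 here) is older than 2 before choosing coverage options. A minimal Python sketch of the same logic, under the stated assumptions that split_ver and lt are illustrative names and not SPDK code:

```python
import re

def split_ver(s):
    # Split on the same separators scripts/common.sh uses (IFS=.-:)
    return [int(p) for p in re.split(r"[.:-]", s) if p.isdigit()]

def lt(v1, v2):
    """True when v1 < v2, comparing numeric components left to right."""
    a, b = split_ver(v1), split_ver(v2)
    # Pad the shorter version with zeros, mirroring the loop bound
    # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) in the trace.
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    for x, y in zip(a, b):
        if x != y:
            return x < y
    return False

print(lt("1.15", "2"))  # True, matching the trace's 'return 0' (success) for 'lt 1.15 2'
```

In the log this decides whether the pre-1.16 --rc lcov_branch_coverage=1/--rc lcov_function_coverage=1 spellings go into LCOV_OPTS.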
00:05:28.819 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event 00:05:28.819 12:13:50 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:28.819 12:13:50 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:28.819 12:13:50 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:28.819 12:13:50 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:28.819 12:13:50 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.819 12:13:50 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.819 12:13:50 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.819 12:13:50 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.819 12:13:50 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.819 12:13:50 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.819 12:13:50 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.819 12:13:50 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.819 12:13:50 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.819 12:13:50 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.819 12:13:50 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.819 12:13:50 event -- scripts/common.sh@344 -- # case "$op" in 00:05:28.819 12:13:50 event -- scripts/common.sh@345 -- # : 1 00:05:28.819 12:13:50 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.819 12:13:50 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.819 12:13:50 event -- scripts/common.sh@365 -- # decimal 1 00:05:28.819 12:13:50 event -- scripts/common.sh@353 -- # local d=1 00:05:28.819 12:13:50 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.819 12:13:50 event -- scripts/common.sh@355 -- # echo 1 00:05:28.819 12:13:50 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.819 12:13:50 event -- scripts/common.sh@366 -- # decimal 2 00:05:28.819 12:13:50 event -- scripts/common.sh@353 -- # local d=2 00:05:28.819 12:13:50 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.819 12:13:50 event -- scripts/common.sh@355 -- # echo 2 00:05:28.819 12:13:50 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.819 12:13:50 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.819 12:13:50 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.819 12:13:50 event -- scripts/common.sh@368 -- # return 0 00:05:28.819 12:13:50 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.819 12:13:50 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:28.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.819 --rc genhtml_branch_coverage=1 00:05:28.819 --rc genhtml_function_coverage=1 00:05:28.819 --rc genhtml_legend=1 00:05:28.819 --rc geninfo_all_blocks=1 00:05:28.819 --rc geninfo_unexecuted_blocks=1 00:05:28.819 00:05:28.819 ' 00:05:28.819 12:13:50 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:28.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.819 --rc genhtml_branch_coverage=1 00:05:28.819 --rc genhtml_function_coverage=1 00:05:28.819 --rc genhtml_legend=1 00:05:28.819 --rc geninfo_all_blocks=1 00:05:28.819 --rc geninfo_unexecuted_blocks=1 00:05:28.819 00:05:28.819 ' 00:05:28.819 12:13:50 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:28.819 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:28.819 --rc genhtml_branch_coverage=1 00:05:28.819 --rc genhtml_function_coverage=1 00:05:28.819 --rc genhtml_legend=1 00:05:28.819 --rc geninfo_all_blocks=1 00:05:28.819 --rc geninfo_unexecuted_blocks=1 00:05:28.819 00:05:28.819 ' 00:05:28.819 12:13:50 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:28.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.819 --rc genhtml_branch_coverage=1 00:05:28.819 --rc genhtml_function_coverage=1 00:05:28.819 --rc genhtml_legend=1 00:05:28.819 --rc geninfo_all_blocks=1 00:05:28.819 --rc geninfo_unexecuted_blocks=1 00:05:28.819 00:05:28.819 ' 00:05:28.819 12:13:50 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/bdev/nbd_common.sh 00:05:28.819 12:13:50 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:28.819 12:13:50 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.819 12:13:50 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:28.819 12:13:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.819 12:13:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.819 ************************************ 00:05:28.819 START TEST event_perf 00:05:28.819 ************************************ 00:05:28.819 12:13:50 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:29.078 Running I/O for 1 seconds...[2024-12-10 12:13:50.996910] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
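[editor's note] event_perf is invoked above with -m 0xF -t 1, a DPDK-style hex core mask selecting four lcores for a one-second run; the EAL parameters and the four "Reactor started on core" lines that follow reflect that mask. A small sketch of how such a mask expands to lcore ids (coremask_to_lcores is an illustrative helper, not part of DPDK or SPDK):

```python
def coremask_to_lcores(mask_str):
    """Expand a hex core mask (e.g. '0xF') into the list of set lcore ids."""
    mask = int(mask_str, 16)
    # Each set bit selects one lcore, lowest bit = lcore 0.
    return [bit for bit in range(mask.bit_length()) if mask >> bit & 1]

print(coremask_to_lcores("0xF"))  # [0, 1, 2, 3] -- the four reactors below
print(coremask_to_lcores("0x1"))  # [0] -- single-core runs like spdk_tgt earlier
```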
00:05:29.078 [2024-12-10 12:13:50.996980] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1440895 ] 00:05:29.078 [2024-12-10 12:13:51.073823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:29.078 [2024-12-10 12:13:51.116637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.078 [2024-12-10 12:13:51.116748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.078 [2024-12-10 12:13:51.116854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.078 [2024-12-10 12:13:51.116854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:30.016 Running I/O for 1 seconds... 00:05:30.016 lcore 0: 205843 00:05:30.016 lcore 1: 205841 00:05:30.016 lcore 2: 205841 00:05:30.016 lcore 3: 205842 00:05:30.016 done. 
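[editor's note] The per-lcore counters above are each reactor's events handled during the 1-second run (-t 1); aggregate throughput is their sum over the runtime. Recomputing from the logged numbers (illustrative only, using the counts printed above):

```python
# Per-lcore event counts copied from the event_perf output above.
lcore_counts = {0: 205843, 1: 205841, 2: 205841, 3: 205842}
runtime_s = 1.0  # event_perf was invoked with -t 1

total = sum(lcore_counts.values())
print(f"{total} events in {runtime_s:.0f}s -> {total / runtime_s:.0f} events/sec")
# -> 823367 events in 1s -> 823367 events/sec
```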
00:05:30.016 00:05:30.016 real 0m1.181s 00:05:30.016 user 0m4.102s 00:05:30.016 sys 0m0.076s 00:05:30.016 12:13:52 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.016 12:13:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:30.016 ************************************ 00:05:30.016 END TEST event_perf 00:05:30.016 ************************************ 00:05:30.275 12:13:52 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/reactor/reactor -t 1 00:05:30.275 12:13:52 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:30.275 12:13:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.275 12:13:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.275 ************************************ 00:05:30.275 START TEST event_reactor 00:05:30.275 ************************************ 00:05:30.275 12:13:52 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/reactor/reactor -t 1 00:05:30.275 [2024-12-10 12:13:52.249778] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:05:30.275 [2024-12-10 12:13:52.249838] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1441147 ] 00:05:30.275 [2024-12-10 12:13:52.327392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.275 [2024-12-10 12:13:52.366897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.652 test_start 00:05:31.652 oneshot 00:05:31.652 tick 100 00:05:31.652 tick 100 00:05:31.652 tick 250 00:05:31.652 tick 100 00:05:31.652 tick 100 00:05:31.652 tick 100 00:05:31.652 tick 250 00:05:31.652 tick 500 00:05:31.652 tick 100 00:05:31.652 tick 100 00:05:31.652 tick 250 00:05:31.652 tick 100 00:05:31.652 tick 100 00:05:31.652 test_end 00:05:31.652 00:05:31.652 real 0m1.176s 00:05:31.652 user 0m1.093s 00:05:31.652 sys 0m0.078s 00:05:31.652 12:13:53 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.652 12:13:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:31.652 ************************************ 00:05:31.652 END TEST event_reactor 00:05:31.652 ************************************ 00:05:31.652 12:13:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:31.652 12:13:53 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:31.652 12:13:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.652 12:13:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.652 ************************************ 00:05:31.652 START TEST event_reactor_perf 00:05:31.652 ************************************ 00:05:31.652 12:13:53 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:31.652 [2024-12-10 12:13:53.492153] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:05:31.652 [2024-12-10 12:13:53.492215] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1441396 ] 00:05:31.652 [2024-12-10 12:13:53.568770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.652 [2024-12-10 12:13:53.610401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.589 test_start 00:05:32.589 test_end 00:05:32.589 Performance: 505626 events per second 00:05:32.589 00:05:32.589 real 0m1.173s 00:05:32.589 user 0m1.097s 00:05:32.589 sys 0m0.072s 00:05:32.589 12:13:54 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.589 12:13:54 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:32.589 ************************************ 00:05:32.589 END TEST event_reactor_perf 00:05:32.589 ************************************ 00:05:32.589 12:13:54 event -- event/event.sh@49 -- # uname -s 00:05:32.589 12:13:54 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:32.589 12:13:54 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/scheduler/scheduler.sh 00:05:32.589 12:13:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.589 12:13:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.589 12:13:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.589 ************************************ 00:05:32.589 START TEST event_scheduler 00:05:32.589 ************************************ 00:05:32.589 12:13:54 event.event_scheduler -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/scheduler/scheduler.sh 00:05:32.849 * Looking for test storage... 00:05:32.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/scheduler 00:05:32.849 12:13:54 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:32.849 12:13:54 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:32.849 12:13:54 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:32.849 12:13:54 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.849 12:13:54 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:32.849 12:13:54 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.849 12:13:54 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:32.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.849 --rc genhtml_branch_coverage=1 00:05:32.849 --rc genhtml_function_coverage=1 00:05:32.849 --rc genhtml_legend=1 00:05:32.849 --rc geninfo_all_blocks=1 00:05:32.849 --rc geninfo_unexecuted_blocks=1 00:05:32.849 00:05:32.849 ' 00:05:32.849 12:13:54 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:32.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.849 --rc genhtml_branch_coverage=1 00:05:32.849 --rc genhtml_function_coverage=1 00:05:32.849 --rc 
genhtml_legend=1 00:05:32.849 --rc geninfo_all_blocks=1 00:05:32.849 --rc geninfo_unexecuted_blocks=1 00:05:32.849 00:05:32.849 ' 00:05:32.849 12:13:54 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:32.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.849 --rc genhtml_branch_coverage=1 00:05:32.849 --rc genhtml_function_coverage=1 00:05:32.849 --rc genhtml_legend=1 00:05:32.849 --rc geninfo_all_blocks=1 00:05:32.849 --rc geninfo_unexecuted_blocks=1 00:05:32.849 00:05:32.849 ' 00:05:32.849 12:13:54 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:32.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.849 --rc genhtml_branch_coverage=1 00:05:32.849 --rc genhtml_function_coverage=1 00:05:32.849 --rc genhtml_legend=1 00:05:32.849 --rc geninfo_all_blocks=1 00:05:32.849 --rc geninfo_unexecuted_blocks=1 00:05:32.849 00:05:32.849 ' 00:05:32.849 12:13:54 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:32.849 12:13:54 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1441684 00:05:32.849 12:13:54 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.849 12:13:54 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:32.849 12:13:54 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1441684 00:05:32.849 12:13:54 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1441684 ']' 00:05:32.849 12:13:54 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.849 12:13:54 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.849 12:13:54 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.849 12:13:54 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.849 12:13:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.849 [2024-12-10 12:13:54.940751] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:05:32.849 [2024-12-10 12:13:54.940799] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1441684 ] 00:05:32.849 [2024-12-10 12:13:55.015792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:33.109 [2024-12-10 12:13:55.057927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.109 [2024-12-10 12:13:55.058040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.109 [2024-12-10 12:13:55.058126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.109 [2024-12-10 12:13:55.058127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.109 12:13:55 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.109 12:13:55 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:33.109 12:13:55 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:33.109 12:13:55 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.109 12:13:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.109 [2024-12-10 12:13:55.106757] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:33.109 [2024-12-10 12:13:55.106773] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:33.109 [2024-12-10 12:13:55.106783] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:33.109 [2024-12-10 12:13:55.106789] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:33.109 [2024-12-10 12:13:55.106794] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:33.109 12:13:55 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.109 12:13:55 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:33.109 12:13:55 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.109 12:13:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.109 [2024-12-10 12:13:55.181719] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:33.109 12:13:55 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.109 12:13:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:33.109 12:13:55 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.109 12:13:55 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.109 12:13:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.109 ************************************ 00:05:33.109 START TEST scheduler_create_thread 00:05:33.109 ************************************ 00:05:33.109 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:33.109 12:13:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:33.109 12:13:55 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.109 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.109 2 00:05:33.109 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.109 12:13:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:33.109 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.109 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.109 3 00:05:33.109 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.109 12:13:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:33.109 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.109 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.109 4 00:05:33.109 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.109 12:13:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:33.109 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.109 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.109 5 00:05:33.109 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.109 12:13:55 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:33.109 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.109 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.109 6 00:05:33.109 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.109 12:13:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:33.109 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.368 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.368 7 00:05:33.368 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.368 12:13:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:33.368 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.368 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.368 8 00:05:33.368 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.368 12:13:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:33.368 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.368 12:13:55 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.368 9 00:05:33.368 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.368 12:13:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:33.368 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.368 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.368 10 00:05:33.368 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.368 12:13:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:33.368 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.368 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.368 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.368 12:13:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:33.368 12:13:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:33.368 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.368 12:13:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.304 12:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.304 12:13:56 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:34.304 12:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.304 12:13:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.680 12:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.680 12:13:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:35.680 12:13:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:35.680 12:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.680 12:13:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.616 12:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.616 00:05:36.616 real 0m3.383s 00:05:36.616 user 0m0.026s 00:05:36.616 sys 0m0.004s 00:05:36.616 12:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.616 12:13:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.616 ************************************ 00:05:36.616 END TEST scheduler_create_thread 00:05:36.616 ************************************ 00:05:36.616 12:13:58 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:36.616 12:13:58 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1441684 00:05:36.616 12:13:58 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1441684 ']' 00:05:36.616 12:13:58 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 1441684 00:05:36.616 12:13:58 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:36.616 12:13:58 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.616 12:13:58 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1441684 00:05:36.616 12:13:58 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:36.616 12:13:58 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:36.616 12:13:58 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1441684' 00:05:36.616 killing process with pid 1441684 00:05:36.616 12:13:58 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1441684 00:05:36.616 12:13:58 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1441684 00:05:36.875 [2024-12-10 12:13:58.981764] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:37.134 00:05:37.134 real 0m4.469s 00:05:37.134 user 0m7.831s 00:05:37.134 sys 0m0.383s 00:05:37.134 12:13:59 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.134 12:13:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.134 ************************************ 00:05:37.134 END TEST event_scheduler 00:05:37.134 ************************************ 00:05:37.134 12:13:59 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:37.134 12:13:59 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:37.134 12:13:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.134 12:13:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.134 12:13:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.134 ************************************ 00:05:37.134 START TEST app_repeat 00:05:37.134 ************************************ 00:05:37.134 12:13:59 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:37.134 12:13:59 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.134 12:13:59 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.134 12:13:59 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:37.134 12:13:59 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.134 12:13:59 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:37.134 12:13:59 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:37.134 12:13:59 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:37.134 12:13:59 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1442427 00:05:37.134 12:13:59 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.134 12:13:59 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:37.134 12:13:59 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1442427' 00:05:37.134 Process app_repeat pid: 1442427 00:05:37.134 12:13:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:37.134 12:13:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:37.134 spdk_app_start Round 0 00:05:37.134 12:13:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1442427 /var/tmp/spdk-nbd.sock 00:05:37.134 12:13:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1442427 ']' 00:05:37.134 12:13:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.134 12:13:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.134 12:13:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:37.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:37.134 12:13:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.134 12:13:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.393 [2024-12-10 12:13:59.302942] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:05:37.393 [2024-12-10 12:13:59.302998] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1442427 ] 00:05:37.393 [2024-12-10 12:13:59.380418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.393 [2024-12-10 12:13:59.420828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.393 [2024-12-10 12:13:59.420829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.393 12:13:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.393 12:13:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:37.393 12:13:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.651 Malloc0 00:05:37.651 12:13:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.909 Malloc1 00:05:37.909 12:13:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.909 12:13:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.909 12:13:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.909 12:13:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:37.909 12:13:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.909 12:13:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:37.909 12:13:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.910 
12:13:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.910 12:13:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.910 12:13:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:37.910 12:13:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.910 12:13:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:37.910 12:13:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:37.910 12:13:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:37.910 12:13:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.910 12:13:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.168 /dev/nbd0 00:05:38.168 12:14:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.168 12:14:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:38.168 12:14:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:38.169 12:14:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:38.169 12:14:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:38.169 12:14:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:38.169 12:14:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:38.169 12:14:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:38.169 12:14:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:38.169 12:14:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:38.169 12:14:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.169 1+0 records in 00:05:38.169 1+0 records out 00:05:38.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185508 s, 22.1 MB/s 00:05:38.169 12:14:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:38.169 12:14:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:38.169 12:14:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:38.169 12:14:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:38.169 12:14:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:38.169 12:14:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.169 12:14:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.169 12:14:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.427 /dev/nbd1 00:05:38.427 12:14:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.427 12:14:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.427 12:14:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:38.427 12:14:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:38.427 12:14:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:38.427 12:14:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:38.427 12:14:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:38.427 12:14:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:38.427 12:14:00 event.app_repeat -- common/autotest_common.sh@888 -- 
# (( i = 1 )) 00:05:38.428 12:14:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:38.428 12:14:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.428 1+0 records in 00:05:38.428 1+0 records out 00:05:38.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228833 s, 17.9 MB/s 00:05:38.428 12:14:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:38.428 12:14:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:38.428 12:14:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:38.428 12:14:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:38.428 12:14:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:38.428 12:14:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.428 12:14:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.428 12:14:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.428 12:14:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.428 12:14:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:38.687 { 00:05:38.687 "nbd_device": "/dev/nbd0", 00:05:38.687 "bdev_name": "Malloc0" 00:05:38.687 }, 00:05:38.687 { 00:05:38.687 "nbd_device": "/dev/nbd1", 00:05:38.687 "bdev_name": "Malloc1" 00:05:38.687 } 00:05:38.687 ]' 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:38.687 { 00:05:38.687 
"nbd_device": "/dev/nbd0", 00:05:38.687 "bdev_name": "Malloc0" 00:05:38.687 }, 00:05:38.687 { 00:05:38.687 "nbd_device": "/dev/nbd1", 00:05:38.687 "bdev_name": "Malloc1" 00:05:38.687 } 00:05:38.687 ]' 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:38.687 /dev/nbd1' 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:38.687 /dev/nbd1' 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:38.687 256+0 records in 00:05:38.687 256+0 records out 00:05:38.687 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100356 s, 104 MB/s 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.687 
12:14:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:38.687 256+0 records in 00:05:38.687 256+0 records out 00:05:38.687 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146729 s, 71.5 MB/s 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:38.687 256+0 records in 00:05:38.687 256+0 records out 00:05:38.687 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016013 s, 65.5 MB/s 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest /dev/nbd0 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.687 12:14:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:38.945 12:14:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:38.946 12:14:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:38.946 12:14:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:38.946 12:14:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.946 12:14:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.946 12:14:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:38.946 12:14:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.946 12:14:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.946 12:14:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.946 12:14:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.203 12:14:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
00:05:39.203 12:14:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.203 12:14:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.203 12:14:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.203 12:14:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.203 12:14:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.203 12:14:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.203 12:14:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.203 12:14:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.203 12:14:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.203 12:14:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.461 12:14:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.461 12:14:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.461 12:14:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.461 12:14:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.461 12:14:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.461 12:14:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.461 12:14:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:39.461 12:14:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.461 12:14:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.461 12:14:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.461 12:14:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.461 12:14:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.461 12:14:01 
event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:39.720 12:14:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:39.979 [2024-12-10 12:14:01.899425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.979 [2024-12-10 12:14:01.936679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.979 [2024-12-10 12:14:01.936680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.979 [2024-12-10 12:14:01.977507] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.979 [2024-12-10 12:14:01.977547] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:43.264 12:14:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:43.264 12:14:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:43.264 spdk_app_start Round 1 00:05:43.264 12:14:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1442427 /var/tmp/spdk-nbd.sock 00:05:43.264 12:14:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1442427 ']' 00:05:43.264 12:14:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.264 12:14:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.264 12:14:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:43.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
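Before each `spdk_kill_instance`, the trace runs `nbd_get_count`: the `nbd_get_disks` RPC returns a JSON array, `jq` extracts the device paths, and `grep -c` counts them (with the `true` fallback seen in the trace, since `grep -c` exits non-zero on a zero count). A standalone sketch of that parsing step, with a hard-coded JSON stand-in for the RPC output:

```shell
# Sketch of nbd_get_count's parsing as seen in the trace; the JSON here
# is a stand-in for the output of `rpc.py ... nbd_get_disks`.
nbd_disks_json='[{"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"},
                 {"nbd_device": "/dev/nbd1", "bdev_name": "Malloc1"}]'
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
# grep -c returns exit status 1 when the count is 0, hence the `true`
# guard, exactly as the `-- # true` line in the trace shows.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"
```

For the sample JSON this prints `2`; once both disks are stopped the RPC returns `[]` and the same pipeline yields `0`, which is what the `'[' 0 -ne 0 ']'` check in the trace verifies.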
00:05:43.264 12:14:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.264 12:14:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.264 12:14:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.264 12:14:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:43.264 12:14:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.264 Malloc0 00:05:43.264 12:14:05 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.264 Malloc1 00:05:43.264 12:14:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.264 12:14:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.264 12:14:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.264 12:14:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.264 12:14:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.264 12:14:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.264 12:14:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.264 12:14:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.264 12:14:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.264 12:14:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.264 12:14:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.264 12:14:05 event.app_repeat -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.264 12:14:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:43.264 12:14:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.264 12:14:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.264 12:14:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:43.523 /dev/nbd0 00:05:43.523 12:14:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:43.523 12:14:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:43.523 12:14:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:43.523 12:14:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:43.523 12:14:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:43.523 12:14:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:43.523 12:14:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:43.523 12:14:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:43.523 12:14:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:43.523 12:14:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:43.523 12:14:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.523 1+0 records in 00:05:43.523 1+0 records out 00:05:43.523 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245842 s, 16.7 MB/s 00:05:43.523 12:14:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:43.523 12:14:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:43.523 
12:14:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:43.523 12:14:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:43.523 12:14:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:43.523 12:14:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.523 12:14:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.523 12:14:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.782 /dev/nbd1 00:05:43.782 12:14:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:43.782 12:14:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:43.782 12:14:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:43.782 12:14:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:43.782 12:14:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:43.782 12:14:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:43.782 12:14:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:43.782 12:14:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:43.782 12:14:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:43.782 12:14:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:43.782 12:14:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.782 1+0 records in 00:05:43.782 1+0 records out 00:05:43.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196524 s, 20.8 MB/s 00:05:43.782 12:14:05 event.app_repeat -- 
common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:43.782 12:14:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:43.782 12:14:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:43.782 12:14:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:43.782 12:14:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:43.782 12:14:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.782 12:14:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.782 12:14:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.782 12:14:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.783 12:14:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.041 12:14:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:44.041 { 00:05:44.041 "nbd_device": "/dev/nbd0", 00:05:44.041 "bdev_name": "Malloc0" 00:05:44.041 }, 00:05:44.041 { 00:05:44.041 "nbd_device": "/dev/nbd1", 00:05:44.041 "bdev_name": "Malloc1" 00:05:44.041 } 00:05:44.041 ]' 00:05:44.041 12:14:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:44.041 { 00:05:44.041 "nbd_device": "/dev/nbd0", 00:05:44.041 "bdev_name": "Malloc0" 00:05:44.041 }, 00:05:44.041 { 00:05:44.041 "nbd_device": "/dev/nbd1", 00:05:44.041 "bdev_name": "Malloc1" 00:05:44.041 } 00:05:44.041 ]' 00:05:44.041 12:14:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:44.042 /dev/nbd1' 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:44.042 /dev/nbd1' 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:44.042 256+0 records in 00:05:44.042 256+0 records out 00:05:44.042 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105913 s, 99.0 MB/s 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:44.042 256+0 records in 00:05:44.042 256+0 records out 00:05:44.042 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142979 s, 73.3 MB/s 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # 
dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:44.042 256+0 records in 00:05:44.042 256+0 records out 00:05:44.042 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159001 s, 65.9 MB/s 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest /dev/nbd0 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest /dev/nbd1 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.042 12:14:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.300 12:14:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.300 12:14:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.300 12:14:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.300 12:14:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.300 12:14:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.300 12:14:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.300 12:14:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.300 12:14:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.300 12:14:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.300 12:14:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:44.559 12:14:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:44.559 12:14:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:44.559 12:14:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:44.559 12:14:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.559 12:14:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.559 12:14:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:44.559 12:14:06 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.559 12:14:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.559 12:14:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.559 12:14:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.559 12:14:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.817 12:14:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:44.817 12:14:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:44.817 12:14:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.817 12:14:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:44.817 12:14:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.817 12:14:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:44.817 12:14:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:44.817 12:14:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:44.817 12:14:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:44.817 12:14:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:44.817 12:14:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:44.817 12:14:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:44.817 12:14:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:45.076 12:14:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:45.076 [2024-12-10 12:14:07.237061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.380 [2024-12-10 12:14:07.280025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.380 [2024-12-10 
12:14:07.280026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.380 [2024-12-10 12:14:07.321933] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:45.380 [2024-12-10 12:14:07.321972] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:48.014 12:14:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:48.014 12:14:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:48.014 spdk_app_start Round 2 00:05:48.014 12:14:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1442427 /var/tmp/spdk-nbd.sock 00:05:48.014 12:14:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1442427 ']' 00:05:48.014 12:14:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:48.014 12:14:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.014 12:14:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:48.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
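Each round's `waitfornbd` readiness check (the `dd if=/dev/nbd0 ... iflag=direct` / `stat -c %s` / `rm -f` sequence in the trace) reads one 4 KiB block off the device with O_DIRECT and confirms a non-empty file landed. A hedged sketch of that probe; `/dev/zero` stands in for the nbd device and `iflag=direct` is dropped so the sketch runs without an attached nbd device:

```shell
# Sketch of the waitfornbd readiness probe from the trace: pull one
# 4 KiB block off the device and check the copy's size is non-zero.
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=4096 count=1 2>/dev/null
size=$(stat -c %s "$tmp")   # the trace checks '[' 4096 '!=' 0 ']'
rm -f "$tmp"
[ "$size" != 0 ] && echo "device readable ($size bytes)"
```

The real helper additionally retries against `/proc/partitions` first, as the `(( i <= 20 ))` / `grep -q -w nbd0` lines in the trace show.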
00:05:48.014 12:14:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.014 12:14:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.273 12:14:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.273 12:14:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:48.273 12:14:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.531 Malloc0 00:05:48.531 12:14:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.790 Malloc1 00:05:48.790 12:14:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.790 12:14:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.790 12:14:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.790 12:14:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.790 12:14:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.790 12:14:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.790 12:14:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.790 12:14:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.790 12:14:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.790 12:14:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.790 12:14:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.790 12:14:10 event.app_repeat -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.790 12:14:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:48.790 12:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.790 12:14:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.790 12:14:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.049 /dev/nbd0 00:05:49.049 12:14:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.049 12:14:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:49.049 12:14:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:49.049 12:14:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:49.049 12:14:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:49.049 12:14:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:49.049 12:14:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:49.049 12:14:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:49.049 12:14:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:49.049 12:14:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:49.049 12:14:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.049 1+0 records in 00:05:49.049 1+0 records out 00:05:49.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201658 s, 20.3 MB/s 00:05:49.049 12:14:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:49.049 12:14:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:49.049 
12:14:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:49.049 12:14:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:49.049 12:14:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:49.049 12:14:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.049 12:14:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.049 12:14:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:49.049 /dev/nbd1 00:05:49.308 12:14:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:49.308 12:14:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:49.308 12:14:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:49.308 12:14:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:49.308 12:14:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:49.308 12:14:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:49.308 12:14:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:49.308 12:14:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:49.308 12:14:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:49.308 12:14:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:49.308 12:14:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.308 1+0 records in 00:05:49.308 1+0 records out 00:05:49.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233234 s, 17.6 MB/s 00:05:49.308 12:14:11 event.app_repeat -- 
common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:49.308 12:14:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:49.308 12:14:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:49.308 12:14:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:49.308 12:14:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:49.308 12:14:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.308 12:14:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.308 12:14:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.308 12:14:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.308 12:14:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.308 12:14:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:49.308 { 00:05:49.308 "nbd_device": "/dev/nbd0", 00:05:49.308 "bdev_name": "Malloc0" 00:05:49.308 }, 00:05:49.308 { 00:05:49.308 "nbd_device": "/dev/nbd1", 00:05:49.308 "bdev_name": "Malloc1" 00:05:49.308 } 00:05:49.308 ]' 00:05:49.308 12:14:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:49.308 { 00:05:49.308 "nbd_device": "/dev/nbd0", 00:05:49.308 "bdev_name": "Malloc0" 00:05:49.308 }, 00:05:49.308 { 00:05:49.308 "nbd_device": "/dev/nbd1", 00:05:49.308 "bdev_name": "Malloc1" 00:05:49.308 } 00:05:49.308 ]' 00:05:49.308 12:14:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:49.567 /dev/nbd1' 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # 
echo '/dev/nbd0 00:05:49.567 /dev/nbd1' 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:49.567 256+0 records in 00:05:49.567 256+0 records out 00:05:49.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010066 s, 104 MB/s 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.567 256+0 records in 00:05:49.567 256+0 records out 00:05:49.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144125 s, 72.8 MB/s 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:49.567 256+0 records in 00:05:49.567 256+0 records out 00:05:49.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152608 s, 68.7 MB/s 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest /dev/nbd0 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest /dev/nbd1 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.567 12:14:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.825 12:14:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.825 12:14:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.825 12:14:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.825 12:14:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.825 12:14:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.825 12:14:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.825 12:14:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.825 12:14:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.825 12:14:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.825 12:14:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.826 12:14:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.084 12:14:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.084 12:14:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.084 12:14:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.084 12:14:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.084 12:14:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.084 12:14:11 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.084 12:14:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.084 12:14:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.084 12:14:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.084 12:14:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.084 12:14:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:50.084 12:14:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:50.084 12:14:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.084 12:14:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.084 12:14:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.084 12:14:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.343 12:14:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:50.343 12:14:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.343 12:14:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.343 12:14:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.343 12:14:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.343 12:14:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.343 12:14:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:50.343 12:14:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:50.602 [2024-12-10 12:14:12.610296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.602 [2024-12-10 12:14:12.646716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.602 [2024-12-10 
12:14:12.646717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.602 [2024-12-10 12:14:12.687724] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:50.602 [2024-12-10 12:14:12.687762] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.886 12:14:15 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1442427 /var/tmp/spdk-nbd.sock 00:05:53.886 12:14:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1442427 ']' 00:05:53.886 12:14:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.886 12:14:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.886 12:14:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:53.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
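The nbd teardown above calls `waitfornbd_exit` for each device: it polls `/proc/partitions` up to 20 times until the device row disappears after `nbd_stop_disk`. A minimal sketch of that loop, reconstructed from the xtrace lines (`@37 (( i <= 20 ))`, `@38 grep -q -w nbd0 /proc/partitions`, `@41 break`); the `PARTS_FILE` override is not part of SPDK — it is added here only so the sketch can be exercised without a real nbd device:

```shell
waitfornbd_exit() {
    local nbd_name=$1
    # PARTS_FILE is a hypothetical override (not in SPDK's nbd_common.sh)
    # so the loop can be tested against a fake partitions table.
    local parts_file=${PARTS_FILE:-/proc/partitions}
    local i
    for ((i = 1; i <= 20; i++)); do
        if ! grep -q -w "$nbd_name" "$parts_file"; then
            return 0    # device row is gone: the nbd disconnect completed
        fi
        sleep 0.1       # still listed; give the kernel time to tear it down
    done
    echo "$nbd_name still present in $parts_file" >&2
    return 1
}
```

The `grep -w` word match matters: it keeps `nbd1` from matching a lingering `nbd10` row.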
00:05:53.886 12:14:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.886 12:14:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.886 12:14:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.886 12:14:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:53.886 12:14:15 event.app_repeat -- event/event.sh@39 -- # killprocess 1442427 00:05:53.886 12:14:15 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1442427 ']' 00:05:53.886 12:14:15 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1442427 00:05:53.886 12:14:15 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:53.886 12:14:15 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.886 12:14:15 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1442427 00:05:53.886 12:14:15 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.886 12:14:15 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.886 12:14:15 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1442427' 00:05:53.886 killing process with pid 1442427 00:05:53.886 12:14:15 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1442427 00:05:53.886 12:14:15 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1442427 00:05:53.886 spdk_app_start is called in Round 0. 00:05:53.886 Shutdown signal received, stop current app iteration 00:05:53.886 Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 reinitialization... 00:05:53.886 spdk_app_start is called in Round 1. 00:05:53.886 Shutdown signal received, stop current app iteration 00:05:53.886 Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 reinitialization... 00:05:53.886 spdk_app_start is called in Round 2. 
00:05:53.886 Shutdown signal received, stop current app iteration 00:05:53.886 Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 reinitialization... 00:05:53.886 spdk_app_start is called in Round 3. 00:05:53.886 Shutdown signal received, stop current app iteration 00:05:53.886 12:14:15 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:53.886 12:14:15 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:53.886 00:05:53.886 real 0m16.589s 00:05:53.886 user 0m36.527s 00:05:53.886 sys 0m2.607s 00:05:53.886 12:14:15 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.886 12:14:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.886 ************************************ 00:05:53.886 END TEST app_repeat 00:05:53.886 ************************************ 00:05:53.886 12:14:15 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:53.886 12:14:15 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/cpu_locks.sh 00:05:53.886 12:14:15 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.886 12:14:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.886 12:14:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.886 ************************************ 00:05:53.886 START TEST cpu_locks 00:05:53.886 ************************************ 00:05:53.886 12:14:15 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/cpu_locks.sh 00:05:53.886 * Looking for test storage... 
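The `killprocess 1442427` call traced above (autotest_common.sh `@954`–`@978`) guards the kill in three steps: check the pid is still alive with `kill -0`, refuse to signal a process whose comm name is `sudo` (that would orphan the real target), then kill and reap. A sketch under those assumptions — a simplified stand-in, not SPDK's exact helper:

```shell
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 0    # already gone: nothing to do
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # Never signal the sudo wrapper itself; kill its child instead.
        [ "$process_name" != sudo ] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap it if it is our child
}
```

The trailing `wait` is why the log shows `-- # wait 1442427` right after the kill: it prevents a zombie and lets the script observe the exit.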
00:05:53.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event 00:05:53.886 12:14:16 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:53.886 12:14:16 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:53.886 12:14:16 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:54.145 12:14:16 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:54.145 12:14:16 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.145 12:14:16 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.145 12:14:16 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.145 12:14:16 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.145 12:14:16 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.145 12:14:16 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.145 12:14:16 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.145 12:14:16 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.145 12:14:16 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.145 12:14:16 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.145 12:14:16 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.145 12:14:16 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:54.145 12:14:16 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:54.145 12:14:16 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.145 12:14:16 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.145 12:14:16 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:54.145 12:14:16 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:54.145 12:14:16 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.145 12:14:16 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:54.145 12:14:16 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.145 12:14:16 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:54.145 12:14:16 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:54.146 12:14:16 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.146 12:14:16 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:54.146 12:14:16 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.146 12:14:16 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.146 12:14:16 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.146 12:14:16 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:54.146 12:14:16 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.146 12:14:16 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:54.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.146 --rc genhtml_branch_coverage=1 00:05:54.146 --rc genhtml_function_coverage=1 00:05:54.146 --rc genhtml_legend=1 00:05:54.146 --rc geninfo_all_blocks=1 00:05:54.146 --rc geninfo_unexecuted_blocks=1 00:05:54.146 00:05:54.146 ' 00:05:54.146 12:14:16 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:54.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.146 --rc genhtml_branch_coverage=1 00:05:54.146 --rc genhtml_function_coverage=1 00:05:54.146 --rc genhtml_legend=1 00:05:54.146 --rc geninfo_all_blocks=1 00:05:54.146 --rc geninfo_unexecuted_blocks=1 
00:05:54.146 00:05:54.146 ' 00:05:54.146 12:14:16 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:54.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.146 --rc genhtml_branch_coverage=1 00:05:54.146 --rc genhtml_function_coverage=1 00:05:54.146 --rc genhtml_legend=1 00:05:54.146 --rc geninfo_all_blocks=1 00:05:54.146 --rc geninfo_unexecuted_blocks=1 00:05:54.146 00:05:54.146 ' 00:05:54.146 12:14:16 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:54.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.146 --rc genhtml_branch_coverage=1 00:05:54.146 --rc genhtml_function_coverage=1 00:05:54.146 --rc genhtml_legend=1 00:05:54.146 --rc geninfo_all_blocks=1 00:05:54.146 --rc geninfo_unexecuted_blocks=1 00:05:54.146 00:05:54.146 ' 00:05:54.146 12:14:16 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:54.146 12:14:16 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:54.146 12:14:16 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:54.146 12:14:16 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:54.146 12:14:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.146 12:14:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.146 12:14:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.146 ************************************ 00:05:54.146 START TEST default_locks 00:05:54.146 ************************************ 00:05:54.146 12:14:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:54.146 12:14:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1445446 00:05:54.146 12:14:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1445446 00:05:54.146 12:14:16 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.146 12:14:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1445446 ']' 00:05:54.146 12:14:16 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.146 12:14:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.146 12:14:16 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.146 12:14:16 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.146 12:14:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.146 [2024-12-10 12:14:16.192740] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
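`default_locks` launches `spdk_tgt -m 0x1` and then blocks in `waitforlisten` (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line, with `max_retries=100`). The shape of that wait can be sketched as follows; note this is a simplified stand-in — existence of the socket path is used as the readiness signal here, whereas SPDK actually probes the RPC server over the socket:

```shell
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while ((max_retries-- > 0)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
        # Simplified readiness check: the RPC socket path has appeared.
        [ -e "$rpc_addr" ] && return 0
        sleep 0.1
    done
    return 1
}
```

Checking the pid on every iteration is what lets the test fail fast (instead of burning the full retry budget) when the target crashes during startup.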
00:05:54.146 [2024-12-10 12:14:16.192783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1445446 ] 00:05:54.146 [2024-12-10 12:14:16.268783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.146 [2024-12-10 12:14:16.310420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.405 12:14:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.405 12:14:16 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:54.405 12:14:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1445446 00:05:54.405 12:14:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.405 12:14:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1445446 00:05:55.340 lslocks: write error 00:05:55.340 12:14:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1445446 00:05:55.340 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1445446 ']' 00:05:55.340 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1445446 00:05:55.340 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:55.340 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.340 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1445446 00:05:55.340 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.341 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.341 12:14:17 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1445446' 00:05:55.341 killing process with pid 1445446 00:05:55.341 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1445446 00:05:55.341 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1445446 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1445446 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1445446 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1445446 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1445446 ']' 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
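The `locks_exist` check traced above (`cpu_locks.sh@22`) pipes `lslocks -p <pid>` into `grep -q spdk_cpu_lock` to confirm the target holds its per-core lock files. A sketch of that check as it appears in the log:

```shell
locks_exist() {
    local pid=$1
    # grep -q exits at the first match, so lslocks may get EPIPE on a
    # later write; that is the harmless "lslocks: write error" seen in
    # the log, not a test failure.
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}
```

`lslocks` is the util-linux tool that lists POSIX/OFD/flock locks system-wide; `-p` restricts it to one process.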
00:05:55.600 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 850: kill: (1445446) - No such process 00:05:55.600 ERROR: process (pid: 1445446) is no longer running 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:55.600 00:05:55.600 real 0m1.378s 00:05:55.600 user 0m1.341s 00:05:55.600 sys 0m0.593s 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.600 12:14:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.600 ************************************ 00:05:55.600 END TEST default_locks 00:05:55.600 ************************************ 00:05:55.600 12:14:17 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:55.600 12:14:17 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.600 12:14:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.600 12:14:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.600 ************************************ 00:05:55.600 START TEST default_locks_via_rpc 00:05:55.600 ************************************ 00:05:55.600 12:14:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:55.600 12:14:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1445747 00:05:55.600 12:14:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1445747 00:05:55.600 12:14:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.600 12:14:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1445747 ']' 00:05:55.600 12:14:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.600 12:14:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.600 12:14:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.600 12:14:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.600 12:14:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.600 [2024-12-10 12:14:17.639731] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:05:55.600 [2024-12-10 12:14:17.639772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1445747 ] 00:05:55.600 [2024-12-10 12:14:17.717369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.600 [2024-12-10 12:14:17.758758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.859 12:14:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.859 12:14:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:55.859 12:14:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:55.859 12:14:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.859 12:14:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.859 12:14:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.859 12:14:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:55.859 12:14:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:55.859 12:14:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:55.859 12:14:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:55.859 12:14:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:55.859 12:14:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:55.859 12:14:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.859 12:14:17 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:55.859 12:14:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1445747 00:05:55.859 12:14:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1445747 00:05:55.859 12:14:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.425 12:14:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1445747 00:05:56.425 12:14:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1445747 ']' 00:05:56.425 12:14:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1445747 00:05:56.425 12:14:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:56.425 12:14:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.425 12:14:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1445747 00:05:56.425 12:14:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.425 12:14:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.425 12:14:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1445747' 00:05:56.425 killing process with pid 1445747 00:05:56.425 12:14:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1445747 00:05:56.425 12:14:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1445747 00:05:56.683 00:05:56.683 real 0m1.129s 00:05:56.683 user 0m1.085s 00:05:56.683 sys 0m0.501s 00:05:56.683 12:14:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.683 12:14:18 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.683 ************************************ 00:05:56.683 END TEST default_locks_via_rpc 00:05:56.683 ************************************ 00:05:56.683 12:14:18 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:56.683 12:14:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.683 12:14:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.683 12:14:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.683 ************************************ 00:05:56.683 START TEST non_locking_app_on_locked_coremask 00:05:56.683 ************************************ 00:05:56.683 12:14:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:56.683 12:14:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1445952 00:05:56.683 12:14:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.683 12:14:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1445952 /var/tmp/spdk.sock 00:05:56.683 12:14:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1445952 ']' 00:05:56.683 12:14:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.683 12:14:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.683 12:14:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:56.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.683 12:14:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.683 12:14:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.683 [2024-12-10 12:14:18.830734] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:05:56.684 [2024-12-10 12:14:18.830772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1445952 ] 00:05:56.942 [2024-12-10 12:14:18.907573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.942 [2024-12-10 12:14:18.949009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.201 12:14:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.201 12:14:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:57.201 12:14:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1446133 00:05:57.201 12:14:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1446133 /var/tmp/spdk2.sock 00:05:57.201 12:14:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:57.201 12:14:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1446133 ']' 00:05:57.201 12:14:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:05:57.201 12:14:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.201 12:14:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.201 12:14:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.201 12:14:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.201 [2024-12-10 12:14:19.216925] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:05:57.201 [2024-12-10 12:14:19.216977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1446133 ] 00:05:57.201 [2024-12-10 12:14:19.310819] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
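`non_locking_app_on_locked_coremask` starts a first `spdk_tgt -m 0x1` that claims core 0, then a second instance with `--disable-cpumask-locks -r /var/tmp/spdk2.sock` that skips the claim — hence the "CPU core locks deactivated." notice. The core claim itself is a non-blocking exclusive lock on a per-core lock file (the `spdk_cpu_lock` names the `lslocks` check greps for). A minimal sketch of that pattern using the `flock` utility; the lock-file path is illustrative, not SPDK's exact path:

```shell
claim_core() {
    # Open the per-core lock file on fd 9 and try a non-blocking
    # exclusive lock; failure means another instance owns the core.
    # Hypothetical path for illustration; SPDK uses /var/tmp/spdk_cpu_lock_* files.
    local lockfile=$1
    exec 9>"$lockfile"
    flock -n 9
}
```

Because the fd stays open for the life of the shell, the lock is released automatically when the process exits — which is why a crashed instance never leaves a core permanently "claimed".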
00:05:57.201 [2024-12-10 12:14:19.310849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.459 [2024-12-10 12:14:19.398910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.026 12:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.026 12:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:58.026 12:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1445952 00:05:58.026 12:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1445952 00:05:58.026 12:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.593 lslocks: write error 00:05:58.593 12:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1445952 00:05:58.593 12:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1445952 ']' 00:05:58.593 12:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1445952 00:05:58.593 12:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:58.593 12:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.593 12:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1445952 00:05:58.593 12:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.593 12:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.593 12:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1445952' 00:05:58.593 killing process with pid 1445952 00:05:58.593 12:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1445952 00:05:58.593 12:14:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1445952 00:05:59.161 12:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1446133 00:05:59.161 12:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1446133 ']' 00:05:59.161 12:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1446133 00:05:59.161 12:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:59.161 12:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.161 12:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1446133 00:05:59.161 12:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.161 12:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.161 12:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1446133' 00:05:59.161 killing process with pid 1446133 00:05:59.161 12:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1446133 00:05:59.161 12:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1446133 00:05:59.420 00:05:59.420 real 0m2.750s 00:05:59.420 user 0m2.896s 00:05:59.420 sys 0m0.911s 00:05:59.420 12:14:21 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.420 12:14:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.420 ************************************ 00:05:59.420 END TEST non_locking_app_on_locked_coremask 00:05:59.420 ************************************ 00:05:59.420 12:14:21 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:59.420 12:14:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.420 12:14:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.420 12:14:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.679 ************************************ 00:05:59.679 START TEST locking_app_on_unlocked_coremask 00:05:59.679 ************************************ 00:05:59.679 12:14:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:59.679 12:14:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1446452 00:05:59.679 12:14:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1446452 /var/tmp/spdk.sock 00:05:59.679 12:14:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:59.679 12:14:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1446452 ']' 00:05:59.679 12:14:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.679 12:14:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.679 12:14:21 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.679 12:14:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.679 12:14:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.679 [2024-12-10 12:14:21.655295] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:05:59.679 [2024-12-10 12:14:21.655345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1446452 ] 00:05:59.679 [2024-12-10 12:14:21.729897] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:59.679 [2024-12-10 12:14:21.729923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.679 [2024-12-10 12:14:21.769223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.938 12:14:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.938 12:14:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:59.938 12:14:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1446670 00:05:59.938 12:14:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1446670 /var/tmp/spdk2.sock 00:05:59.938 12:14:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:59.938 12:14:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1446670 ']' 00:05:59.938 12:14:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.938 12:14:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.938 12:14:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.938 12:14:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.938 12:14:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.938 [2024-12-10 12:14:22.052676] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:05:59.938 [2024-12-10 12:14:22.052728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1446670 ] 00:06:00.197 [2024-12-10 12:14:22.146078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.197 [2024-12-10 12:14:22.226177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.764 12:14:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.764 12:14:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:00.764 12:14:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1446670 00:06:00.764 12:14:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.764 12:14:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1446670 00:06:01.331 lslocks: write error 00:06:01.331 12:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1446452 00:06:01.331 12:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1446452 ']' 00:06:01.331 12:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1446452 00:06:01.331 12:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:01.331 12:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.331 12:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1446452 00:06:01.331 12:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.331 12:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.331 12:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1446452' 00:06:01.331 killing process with pid 1446452 00:06:01.331 12:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1446452 00:06:01.331 12:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1446452 00:06:01.899 12:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1446670 00:06:01.899 12:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1446670 ']' 00:06:01.899 12:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1446670 00:06:01.899 12:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:01.899 12:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.899 12:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1446670 00:06:01.899 12:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.899 12:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.899 12:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1446670' 00:06:01.899 killing process with pid 1446670 00:06:01.899 12:14:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1446670 00:06:01.899 12:14:23 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1446670 00:06:02.158 00:06:02.158 real 0m2.628s 00:06:02.158 user 0m2.784s 00:06:02.158 sys 0m0.846s 00:06:02.158 12:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.158 12:14:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.158 ************************************ 00:06:02.158 END TEST locking_app_on_unlocked_coremask 00:06:02.158 ************************************ 00:06:02.158 12:14:24 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:02.158 12:14:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.158 12:14:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.158 12:14:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.158 ************************************ 00:06:02.158 START TEST locking_app_on_locked_coremask 00:06:02.158 ************************************ 00:06:02.158 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:02.158 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1446945 00:06:02.158 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1446945 /var/tmp/spdk.sock 00:06:02.158 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.158 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1446945 ']' 00:06:02.158 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:06:02.158 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.158 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.158 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.158 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.417 [2024-12-10 12:14:24.349224] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:06:02.417 [2024-12-10 12:14:24.349265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1446945 ] 00:06:02.417 [2024-12-10 12:14:24.427528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.417 [2024-12-10 12:14:24.468859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.676 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.676 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:02.676 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1447133 00:06:02.676 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1447133 /var/tmp/spdk2.sock 00:06:02.676 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:06:02.676 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:02.676 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1447133 /var/tmp/spdk2.sock 00:06:02.676 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:02.676 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.676 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:02.676 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.676 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1447133 /var/tmp/spdk2.sock 00:06:02.676 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1447133 ']' 00:06:02.676 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.676 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.676 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:02.676 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.676 12:14:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.676 [2024-12-10 12:14:24.737714] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:06:02.676 [2024-12-10 12:14:24.737763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1447133 ] 00:06:02.676 [2024-12-10 12:14:24.826412] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1446945 has claimed it. 00:06:02.676 [2024-12-10 12:14:24.826443] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:03.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 850: kill: (1447133) - No such process 00:06:03.243 ERROR: process (pid: 1447133) is no longer running 00:06:03.243 12:14:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.243 12:14:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:03.243 12:14:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:03.243 12:14:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:03.243 12:14:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:03.243 12:14:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:03.243 12:14:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1446945 00:06:03.243 12:14:25 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1446945 00:06:03.243 12:14:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.501 lslocks: write error 00:06:03.501 12:14:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1446945 00:06:03.501 12:14:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1446945 ']' 00:06:03.501 12:14:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1446945 00:06:03.501 12:14:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:03.760 12:14:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.760 12:14:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1446945 00:06:03.760 12:14:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.760 12:14:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.760 12:14:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1446945' 00:06:03.760 killing process with pid 1446945 00:06:03.760 12:14:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1446945 00:06:03.760 12:14:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1446945 00:06:04.019 00:06:04.019 real 0m1.724s 00:06:04.019 user 0m1.841s 00:06:04.019 sys 0m0.572s 00:06:04.019 12:14:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.019 12:14:26 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:04.019 ************************************ 00:06:04.019 END TEST locking_app_on_locked_coremask 00:06:04.019 ************************************ 00:06:04.019 12:14:26 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:04.019 12:14:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.019 12:14:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.019 12:14:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.019 ************************************ 00:06:04.019 START TEST locking_overlapped_coremask 00:06:04.019 ************************************ 00:06:04.019 12:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:04.019 12:14:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1447401 00:06:04.019 12:14:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1447401 /var/tmp/spdk.sock 00:06:04.019 12:14:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x7 00:06:04.019 12:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1447401 ']' 00:06:04.019 12:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.019 12:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.019 12:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:04.019 12:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.019 12:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.019 [2024-12-10 12:14:26.141955] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:06:04.020 [2024-12-10 12:14:26.141998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1447401 ] 00:06:04.278 [2024-12-10 12:14:26.216734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:04.278 [2024-12-10 12:14:26.258213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.278 [2024-12-10 12:14:26.258250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.278 [2024-12-10 12:14:26.258250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.537 12:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.537 12:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:04.537 12:14:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1447432 00:06:04.537 12:14:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:04.537 12:14:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1447432 /var/tmp/spdk2.sock 00:06:04.537 12:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:04.537 12:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 1447432 /var/tmp/spdk2.sock 00:06:04.537 12:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:04.537 12:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.537 12:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:04.537 12:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.537 12:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1447432 /var/tmp/spdk2.sock 00:06:04.537 12:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1447432 ']' 00:06:04.537 12:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.537 12:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.537 12:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.537 12:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.537 12:14:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.537 [2024-12-10 12:14:26.527684] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:06:04.537 [2024-12-10 12:14:26.527728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1447432 ] 00:06:04.537 [2024-12-10 12:14:26.618738] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1447401 has claimed it. 00:06:04.537 [2024-12-10 12:14:26.618777] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:05.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 850: kill: (1447432) - No such process 00:06:05.103 ERROR: process (pid: 1447432) is no longer running 00:06:05.103 12:14:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.103 12:14:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:05.103 12:14:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:05.103 12:14:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:05.103 12:14:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:05.103 12:14:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:05.103 12:14:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:05.103 12:14:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:05.104 12:14:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:05.104 12:14:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:05.104 12:14:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1447401 00:06:05.104 12:14:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1447401 ']' 00:06:05.104 12:14:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1447401 00:06:05.104 12:14:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:05.104 12:14:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.104 12:14:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1447401 00:06:05.104 12:14:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.104 12:14:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.104 12:14:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1447401' 00:06:05.104 killing process with pid 1447401 00:06:05.104 12:14:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1447401 00:06:05.104 12:14:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1447401 00:06:05.670 00:06:05.670 real 0m1.441s 00:06:05.670 user 0m3.976s 00:06:05.670 sys 0m0.388s 00:06:05.670 12:14:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.670 12:14:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.670 
************************************ 00:06:05.670 END TEST locking_overlapped_coremask 00:06:05.670 ************************************ 00:06:05.670 12:14:27 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:05.670 12:14:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.670 12:14:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.670 12:14:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.670 ************************************ 00:06:05.670 START TEST locking_overlapped_coremask_via_rpc 00:06:05.670 ************************************ 00:06:05.670 12:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:05.670 12:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1447696 00:06:05.670 12:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1447696 /var/tmp/spdk.sock 00:06:05.670 12:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:05.670 12:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1447696 ']' 00:06:05.670 12:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.670 12:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.670 12:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:05.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.670 12:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.670 12:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.670 [2024-12-10 12:14:27.655047] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:06:05.670 [2024-12-10 12:14:27.655092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1447696 ] 00:06:05.670 [2024-12-10 12:14:27.731104] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:05.670 [2024-12-10 12:14:27.731129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.670 [2024-12-10 12:14:27.771240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.670 [2024-12-10 12:14:27.771349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.670 [2024-12-10 12:14:27.771349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.929 12:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.929 12:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.929 12:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1447709 00:06:05.929 12:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1447709 /var/tmp/spdk2.sock 00:06:05.929 12:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:06:05.929 12:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1447709 ']' 00:06:05.929 12:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.929 12:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.929 12:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.929 12:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.929 12:14:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.929 [2024-12-10 12:14:28.047386] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:06:05.929 [2024-12-10 12:14:28.047435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1447709 ] 00:06:06.188 [2024-12-10 12:14:28.139908] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:06.188 [2024-12-10 12:14:28.139937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.188 [2024-12-10 12:14:28.225598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.188 [2024-12-10 12:14:28.225713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.188 [2024-12-10 12:14:28.225714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:06.757 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.757 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:06.757 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:06.757 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.757 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.757 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.757 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.757 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:06.757 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.757 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:06.757 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.757 12:14:28 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:06.757 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.757 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.757 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.757 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.757 [2024-12-10 12:14:28.895238] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1447696 has claimed it. 00:06:06.757 request: 00:06:06.758 { 00:06:06.758 "method": "framework_enable_cpumask_locks", 00:06:06.758 "req_id": 1 00:06:06.758 } 00:06:06.758 Got JSON-RPC error response 00:06:06.758 response: 00:06:06.758 { 00:06:06.758 "code": -32603, 00:06:06.758 "message": "Failed to claim CPU core: 2" 00:06:06.758 } 00:06:06.758 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:06.758 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:06.758 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:06.758 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:06.758 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:06.758 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1447696 /var/tmp/spdk.sock 00:06:06.758 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 1447696 ']' 00:06:06.758 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.758 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.758 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.758 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.758 12:14:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.018 12:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.018 12:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:07.018 12:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1447709 /var/tmp/spdk2.sock 00:06:07.018 12:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1447709 ']' 00:06:07.018 12:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.018 12:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.018 12:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:07.018 12:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.018 12:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.277 12:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.277 12:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:07.277 12:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:07.277 12:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:07.277 12:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:07.277 12:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:07.277 00:06:07.277 real 0m1.709s 00:06:07.277 user 0m0.838s 00:06:07.277 sys 0m0.128s 00:06:07.277 12:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.277 12:14:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.277 ************************************ 00:06:07.277 END TEST locking_overlapped_coremask_via_rpc 00:06:07.277 ************************************ 00:06:07.277 12:14:29 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:07.277 12:14:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1447696 ]] 00:06:07.277 12:14:29 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1447696 00:06:07.277 12:14:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1447696 ']' 00:06:07.277 12:14:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1447696 00:06:07.277 12:14:29 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:07.277 12:14:29 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.277 12:14:29 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1447696 00:06:07.277 12:14:29 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.277 12:14:29 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.277 12:14:29 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1447696' 00:06:07.277 killing process with pid 1447696 00:06:07.277 12:14:29 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1447696 00:06:07.277 12:14:29 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1447696 00:06:07.845 12:14:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1447709 ]] 00:06:07.845 12:14:29 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1447709 00:06:07.845 12:14:29 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1447709 ']' 00:06:07.845 12:14:29 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1447709 00:06:07.845 12:14:29 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:07.845 12:14:29 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.845 12:14:29 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1447709 00:06:07.845 12:14:29 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:07.845 12:14:29 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:07.845 12:14:29 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1447709' 00:06:07.845 killing process with pid 1447709 00:06:07.845 12:14:29 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1447709 00:06:07.845 12:14:29 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1447709 00:06:08.105 12:14:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:08.105 12:14:30 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:08.105 12:14:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1447696 ]] 00:06:08.105 12:14:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1447696 00:06:08.105 12:14:30 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1447696 ']' 00:06:08.105 12:14:30 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1447696 00:06:08.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (1447696) - No such process 00:06:08.105 12:14:30 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1447696 is not found' 00:06:08.105 Process with pid 1447696 is not found 00:06:08.105 12:14:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1447709 ]] 00:06:08.105 12:14:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1447709 00:06:08.105 12:14:30 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1447709 ']' 00:06:08.105 12:14:30 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1447709 00:06:08.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (1447709) - No such process 00:06:08.105 12:14:30 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1447709 is not found' 00:06:08.105 Process with pid 1447709 is not found 00:06:08.105 12:14:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:08.105 00:06:08.105 real 0m14.156s 00:06:08.105 user 0m24.504s 00:06:08.105 sys 0m4.926s 00:06:08.105 12:14:30 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:06:08.105 12:14:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.105 ************************************ 00:06:08.105 END TEST cpu_locks 00:06:08.105 ************************************ 00:06:08.105 00:06:08.105 real 0m39.346s 00:06:08.105 user 1m15.421s 00:06:08.105 sys 0m8.516s 00:06:08.105 12:14:30 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.105 12:14:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.105 ************************************ 00:06:08.105 END TEST event 00:06:08.105 ************************************ 00:06:08.105 12:14:30 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/thread.sh 00:06:08.105 12:14:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.105 12:14:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.105 12:14:30 -- common/autotest_common.sh@10 -- # set +x 00:06:08.105 ************************************ 00:06:08.105 START TEST thread 00:06:08.105 ************************************ 00:06:08.105 12:14:30 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/thread.sh 00:06:08.105 * Looking for test storage... 
00:06:08.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread 00:06:08.364 12:14:30 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:08.364 12:14:30 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:08.364 12:14:30 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:08.364 12:14:30 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:08.364 12:14:30 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.364 12:14:30 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.364 12:14:30 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.364 12:14:30 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.364 12:14:30 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.364 12:14:30 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.364 12:14:30 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.364 12:14:30 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.364 12:14:30 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.364 12:14:30 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.364 12:14:30 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.364 12:14:30 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:08.364 12:14:30 thread -- scripts/common.sh@345 -- # : 1 00:06:08.364 12:14:30 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.364 12:14:30 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.364 12:14:30 thread -- scripts/common.sh@365 -- # decimal 1 00:06:08.364 12:14:30 thread -- scripts/common.sh@353 -- # local d=1 00:06:08.364 12:14:30 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.364 12:14:30 thread -- scripts/common.sh@355 -- # echo 1 00:06:08.364 12:14:30 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.364 12:14:30 thread -- scripts/common.sh@366 -- # decimal 2 00:06:08.364 12:14:30 thread -- scripts/common.sh@353 -- # local d=2 00:06:08.364 12:14:30 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.364 12:14:30 thread -- scripts/common.sh@355 -- # echo 2 00:06:08.364 12:14:30 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.364 12:14:30 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.364 12:14:30 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.364 12:14:30 thread -- scripts/common.sh@368 -- # return 0 00:06:08.364 12:14:30 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.364 12:14:30 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:08.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.364 --rc genhtml_branch_coverage=1 00:06:08.364 --rc genhtml_function_coverage=1 00:06:08.364 --rc genhtml_legend=1 00:06:08.364 --rc geninfo_all_blocks=1 00:06:08.364 --rc geninfo_unexecuted_blocks=1 00:06:08.364 00:06:08.364 ' 00:06:08.364 12:14:30 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:08.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.364 --rc genhtml_branch_coverage=1 00:06:08.364 --rc genhtml_function_coverage=1 00:06:08.364 --rc genhtml_legend=1 00:06:08.364 --rc geninfo_all_blocks=1 00:06:08.364 --rc geninfo_unexecuted_blocks=1 00:06:08.364 00:06:08.364 ' 00:06:08.364 12:14:30 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:08.364 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.364 --rc genhtml_branch_coverage=1 00:06:08.364 --rc genhtml_function_coverage=1 00:06:08.364 --rc genhtml_legend=1 00:06:08.364 --rc geninfo_all_blocks=1 00:06:08.364 --rc geninfo_unexecuted_blocks=1 00:06:08.364 00:06:08.364 ' 00:06:08.364 12:14:30 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:08.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.364 --rc genhtml_branch_coverage=1 00:06:08.364 --rc genhtml_function_coverage=1 00:06:08.364 --rc genhtml_legend=1 00:06:08.364 --rc geninfo_all_blocks=1 00:06:08.364 --rc geninfo_unexecuted_blocks=1 00:06:08.364 00:06:08.364 ' 00:06:08.364 12:14:30 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:08.364 12:14:30 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:08.364 12:14:30 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.364 12:14:30 thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.364 ************************************ 00:06:08.364 START TEST thread_poller_perf 00:06:08.364 ************************************ 00:06:08.364 12:14:30 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:08.364 [2024-12-10 12:14:30.411625] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:06:08.364 [2024-12-10 12:14:30.411692] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1448266 ] 00:06:08.364 [2024-12-10 12:14:30.490908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.622 [2024-12-10 12:14:30.532915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.622 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:09.558 [2024-12-10T11:14:31.726Z] ====================================== 00:06:09.558 [2024-12-10T11:14:31.726Z] busy:2307825382 (cyc) 00:06:09.558 [2024-12-10T11:14:31.726Z] total_run_count: 402000 00:06:09.558 [2024-12-10T11:14:31.726Z] tsc_hz: 2300000000 (cyc) 00:06:09.558 [2024-12-10T11:14:31.726Z] ====================================== 00:06:09.558 [2024-12-10T11:14:31.726Z] poller_cost: 5740 (cyc), 2495 (nsec) 00:06:09.558 00:06:09.558 real 0m1.190s 00:06:09.558 user 0m1.110s 00:06:09.558 sys 0m0.077s 00:06:09.558 12:14:31 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.558 12:14:31 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:09.558 ************************************ 00:06:09.558 END TEST thread_poller_perf 00:06:09.558 ************************************ 00:06:09.558 12:14:31 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:09.558 12:14:31 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:09.558 12:14:31 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.558 12:14:31 thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.558 ************************************ 00:06:09.558 START TEST thread_poller_perf 00:06:09.558 
************************************ 00:06:09.558 12:14:31 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:09.558 [2024-12-10 12:14:31.670387] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:06:09.558 [2024-12-10 12:14:31.670457] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1448516 ] 00:06:09.817 [2024-12-10 12:14:31.748676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.817 [2024-12-10 12:14:31.787553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.817 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:10.753 [2024-12-10T11:14:32.921Z] ====================================== 00:06:10.753 [2024-12-10T11:14:32.921Z] busy:2301372724 (cyc) 00:06:10.753 [2024-12-10T11:14:32.921Z] total_run_count: 4975000 00:06:10.753 [2024-12-10T11:14:32.921Z] tsc_hz: 2300000000 (cyc) 00:06:10.753 [2024-12-10T11:14:32.921Z] ====================================== 00:06:10.753 [2024-12-10T11:14:32.921Z] poller_cost: 462 (cyc), 200 (nsec) 00:06:10.753 00:06:10.753 real 0m1.176s 00:06:10.753 user 0m1.099s 00:06:10.753 sys 0m0.074s 00:06:10.753 12:14:32 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.753 12:14:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:10.753 ************************************ 00:06:10.753 END TEST thread_poller_perf 00:06:10.753 ************************************ 00:06:10.753 12:14:32 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:10.753 00:06:10.753 real 0m2.675s 00:06:10.753 user 0m2.361s 00:06:10.753 sys 0m0.328s 00:06:10.753 12:14:32 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.753 12:14:32 thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.753 ************************************ 00:06:10.753 END TEST thread 00:06:10.753 ************************************ 00:06:10.753 12:14:32 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:10.753 12:14:32 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/cmdline.sh 00:06:10.753 12:14:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.753 12:14:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.753 12:14:32 -- common/autotest_common.sh@10 -- # set +x 00:06:11.013 ************************************ 00:06:11.013 START TEST app_cmdline 00:06:11.013 ************************************ 00:06:11.013 12:14:32 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/cmdline.sh 00:06:11.013 * Looking for test storage... 
00:06:11.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app 00:06:11.013 12:14:33 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:11.013 12:14:33 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:11.013 12:14:33 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:11.013 12:14:33 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.013 12:14:33 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:11.013 12:14:33 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.013 12:14:33 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:11.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.013 --rc genhtml_branch_coverage=1 00:06:11.013 --rc genhtml_function_coverage=1 00:06:11.013 --rc genhtml_legend=1 00:06:11.013 --rc geninfo_all_blocks=1 00:06:11.013 --rc geninfo_unexecuted_blocks=1 00:06:11.013 00:06:11.013 ' 00:06:11.013 12:14:33 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:11.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.013 --rc genhtml_branch_coverage=1 00:06:11.013 --rc genhtml_function_coverage=1 00:06:11.013 --rc genhtml_legend=1 00:06:11.013 --rc geninfo_all_blocks=1 00:06:11.013 --rc geninfo_unexecuted_blocks=1 00:06:11.013 00:06:11.013 ' 00:06:11.013 12:14:33 app_cmdline -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:11.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.013 --rc genhtml_branch_coverage=1 00:06:11.013 --rc genhtml_function_coverage=1 00:06:11.013 --rc genhtml_legend=1 00:06:11.013 --rc geninfo_all_blocks=1 00:06:11.013 --rc geninfo_unexecuted_blocks=1 00:06:11.013 00:06:11.013 ' 00:06:11.013 12:14:33 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:11.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.013 --rc genhtml_branch_coverage=1 00:06:11.013 --rc genhtml_function_coverage=1 00:06:11.013 --rc genhtml_legend=1 00:06:11.013 --rc geninfo_all_blocks=1 00:06:11.013 --rc geninfo_unexecuted_blocks=1 00:06:11.013 00:06:11.013 ' 00:06:11.013 12:14:33 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:11.013 12:14:33 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1448817 00:06:11.013 12:14:33 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1448817 00:06:11.013 12:14:33 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:11.013 12:14:33 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1448817 ']' 00:06:11.013 12:14:33 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.013 12:14:33 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.013 12:14:33 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:11.013 12:14:33 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.013 12:14:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:11.013 [2024-12-10 12:14:33.161582] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:06:11.013 [2024-12-10 12:14:33.161632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1448817 ] 00:06:11.272 [2024-12-10 12:14:33.237661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.272 [2024-12-10 12:14:33.278741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.530 12:14:33 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.530 12:14:33 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:11.530 12:14:33 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py spdk_get_version 00:06:11.530 { 00:06:11.530 "version": "SPDK v25.01-pre git sha1 92d1e663a", 00:06:11.530 "fields": { 00:06:11.530 "major": 25, 00:06:11.530 "minor": 1, 00:06:11.530 "patch": 0, 00:06:11.530 "suffix": "-pre", 00:06:11.530 "commit": "92d1e663a" 00:06:11.530 } 00:06:11.530 } 00:06:11.530 12:14:33 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:11.530 12:14:33 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:11.530 12:14:33 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:11.530 12:14:33 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:11.530 12:14:33 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:11.530 12:14:33 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.530 12:14:33 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:06:11.530 12:14:33 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:11.530 12:14:33 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:11.530 12:14:33 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.789 12:14:33 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:11.789 12:14:33 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:11.789 12:14:33 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.789 12:14:33 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:11.789 12:14:33 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.789 12:14:33 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:06:11.789 12:14:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.789 12:14:33 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:06:11.789 12:14:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.789 12:14:33 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:06:11.789 12:14:33 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.789 12:14:33 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:06:11.790 12:14:33 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:06:11.790 12:14:33 app_cmdline -- 
common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.790 request: 00:06:11.790 { 00:06:11.790 "method": "env_dpdk_get_mem_stats", 00:06:11.790 "req_id": 1 00:06:11.790 } 00:06:11.790 Got JSON-RPC error response 00:06:11.790 response: 00:06:11.790 { 00:06:11.790 "code": -32601, 00:06:11.790 "message": "Method not found" 00:06:11.790 } 00:06:11.790 12:14:33 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:11.790 12:14:33 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.790 12:14:33 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:11.790 12:14:33 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.790 12:14:33 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1448817 00:06:11.790 12:14:33 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1448817 ']' 00:06:11.790 12:14:33 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1448817 00:06:11.790 12:14:33 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:11.790 12:14:33 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.790 12:14:33 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1448817 00:06:12.048 12:14:33 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.048 12:14:33 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.048 12:14:33 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1448817' 00:06:12.049 killing process with pid 1448817 00:06:12.049 12:14:33 app_cmdline -- common/autotest_common.sh@973 -- # kill 1448817 00:06:12.049 12:14:33 app_cmdline -- common/autotest_common.sh@978 -- # wait 1448817 00:06:12.307 00:06:12.307 real 0m1.336s 00:06:12.307 user 0m1.556s 00:06:12.307 sys 0m0.443s 00:06:12.307 12:14:34 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 
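The rejected env_dpdk_get_mem_stats call above produces JSON-RPC error -32601 ("Method not found") because spdk_tgt was started with `--rpcs-allowed spdk_get_version,rpc_get_methods`. A caller can gate on that code; a sketch using the response text pasted from the trace (plain grep here, assuming no jq dependency):

```shell
# Error payload as seen in the log; detect the JSON-RPC "Method not found" code.
response='{"code": -32601, "message": "Method not found"}'

# '--' stops grep from parsing the leading '-' of the pattern as an option.
if printf '%s' "$response" | grep -q -- '-32601'; then
  echo "RPC not permitted by --rpcs-allowed"
fi
```

rpc.py exits non-zero on such a response, which is why the NOT wrapper in the trace records `es=1` as the expected outcome.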
00:06:12.307 12:14:34 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:12.307 ************************************ 00:06:12.307 END TEST app_cmdline 00:06:12.307 ************************************ 00:06:12.307 12:14:34 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/version.sh 00:06:12.307 12:14:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.307 12:14:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.307 12:14:34 -- common/autotest_common.sh@10 -- # set +x 00:06:12.307 ************************************ 00:06:12.307 START TEST version 00:06:12.307 ************************************ 00:06:12.307 12:14:34 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/version.sh 00:06:12.307 * Looking for test storage... 00:06:12.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app 00:06:12.307 12:14:34 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:12.307 12:14:34 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:12.308 12:14:34 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:12.567 12:14:34 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:12.567 12:14:34 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.567 12:14:34 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.567 12:14:34 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.567 12:14:34 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.567 12:14:34 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.567 12:14:34 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.567 12:14:34 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.567 12:14:34 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.567 12:14:34 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.567 
12:14:34 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.567 12:14:34 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.567 12:14:34 version -- scripts/common.sh@344 -- # case "$op" in 00:06:12.567 12:14:34 version -- scripts/common.sh@345 -- # : 1 00:06:12.567 12:14:34 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.567 12:14:34 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.567 12:14:34 version -- scripts/common.sh@365 -- # decimal 1 00:06:12.567 12:14:34 version -- scripts/common.sh@353 -- # local d=1 00:06:12.567 12:14:34 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.567 12:14:34 version -- scripts/common.sh@355 -- # echo 1 00:06:12.567 12:14:34 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.567 12:14:34 version -- scripts/common.sh@366 -- # decimal 2 00:06:12.567 12:14:34 version -- scripts/common.sh@353 -- # local d=2 00:06:12.567 12:14:34 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.567 12:14:34 version -- scripts/common.sh@355 -- # echo 2 00:06:12.567 12:14:34 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.567 12:14:34 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.567 12:14:34 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.567 12:14:34 version -- scripts/common.sh@368 -- # return 0 00:06:12.567 12:14:34 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.567 12:14:34 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:12.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.567 --rc genhtml_branch_coverage=1 00:06:12.567 --rc genhtml_function_coverage=1 00:06:12.567 --rc genhtml_legend=1 00:06:12.567 --rc geninfo_all_blocks=1 00:06:12.567 --rc geninfo_unexecuted_blocks=1 00:06:12.567 00:06:12.567 ' 00:06:12.567 12:14:34 version -- 
common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:12.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.567 --rc genhtml_branch_coverage=1 00:06:12.567 --rc genhtml_function_coverage=1 00:06:12.567 --rc genhtml_legend=1 00:06:12.567 --rc geninfo_all_blocks=1 00:06:12.567 --rc geninfo_unexecuted_blocks=1 00:06:12.567 00:06:12.567 ' 00:06:12.567 12:14:34 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:12.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.567 --rc genhtml_branch_coverage=1 00:06:12.567 --rc genhtml_function_coverage=1 00:06:12.567 --rc genhtml_legend=1 00:06:12.567 --rc geninfo_all_blocks=1 00:06:12.567 --rc geninfo_unexecuted_blocks=1 00:06:12.567 00:06:12.567 ' 00:06:12.567 12:14:34 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:12.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.567 --rc genhtml_branch_coverage=1 00:06:12.567 --rc genhtml_function_coverage=1 00:06:12.567 --rc genhtml_legend=1 00:06:12.567 --rc geninfo_all_blocks=1 00:06:12.567 --rc geninfo_unexecuted_blocks=1 00:06:12.567 00:06:12.567 ' 00:06:12.567 12:14:34 version -- app/version.sh@17 -- # get_header_version major 00:06:12.567 12:14:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk/version.h 00:06:12.567 12:14:34 version -- app/version.sh@14 -- # cut -f2 00:06:12.567 12:14:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.567 12:14:34 version -- app/version.sh@17 -- # major=25 00:06:12.567 12:14:34 version -- app/version.sh@18 -- # get_header_version minor 00:06:12.567 12:14:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk/version.h 00:06:12.567 12:14:34 version -- app/version.sh@14 -- # cut -f2 00:06:12.567 12:14:34 version -- app/version.sh@14 -- # tr -d '"' 
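The get_header_version calls traced above grep `#define SPDK_VERSION_*` lines out of include/spdk/version.h, cut the value field, and strip quotes. A self-contained sketch (hypothetical header content; awk used in place of the script's field-based cut):

```shell
# Fabricate a version.h-style header so the extraction is reproducible.
header=$(mktemp)
cat > "$header" <<'EOF'
#define SPDK_VERSION_MAJOR 25
#define SPDK_VERSION_MINOR 1
#define SPDK_VERSION_PATCH 0
#define SPDK_VERSION_SUFFIX "-pre"
EOF

# Pull one numeric/string field out of the header, dropping quotes.
get_header_version() {
  grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$header" | awk '{print $3}' | tr -d '"'
}

version="$(get_header_version MAJOR).$(get_header_version MINOR)"
echo "$version$(get_header_version SUFFIX)"   # -> 25.1-pre
```

The trace then maps the "-pre" suffix to the Python-style "25.1rc0" before asserting it matches `spdk.__version__`.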
00:06:12.567 12:14:34 version -- app/version.sh@18 -- # minor=1 00:06:12.567 12:14:34 version -- app/version.sh@19 -- # get_header_version patch 00:06:12.567 12:14:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk/version.h 00:06:12.567 12:14:34 version -- app/version.sh@14 -- # cut -f2 00:06:12.567 12:14:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.567 12:14:34 version -- app/version.sh@19 -- # patch=0 00:06:12.567 12:14:34 version -- app/version.sh@20 -- # get_header_version suffix 00:06:12.567 12:14:34 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk/version.h 00:06:12.567 12:14:34 version -- app/version.sh@14 -- # cut -f2 00:06:12.567 12:14:34 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.567 12:14:34 version -- app/version.sh@20 -- # suffix=-pre 00:06:12.567 12:14:34 version -- app/version.sh@22 -- # version=25.1 00:06:12.567 12:14:34 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:12.567 12:14:34 version -- app/version.sh@28 -- # version=25.1rc0 00:06:12.567 12:14:34 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python 00:06:12.568 12:14:34 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:12.568 12:14:34 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:12.568 12:14:34 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:12.568 00:06:12.568 real 0m0.243s 00:06:12.568 user 0m0.154s 00:06:12.568 sys 0m0.132s 00:06:12.568 12:14:34 version -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.568 12:14:34 version -- common/autotest_common.sh@10 -- # set +x 00:06:12.568 ************************************ 00:06:12.568 END TEST version 00:06:12.568 ************************************ 00:06:12.568 12:14:34 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:12.568 12:14:34 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:12.568 12:14:34 -- spdk/autotest.sh@194 -- # uname -s 00:06:12.568 12:14:34 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:12.568 12:14:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:12.568 12:14:34 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:12.568 12:14:34 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:12.568 12:14:34 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:12.568 12:14:34 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:12.568 12:14:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:12.568 12:14:34 -- common/autotest_common.sh@10 -- # set +x 00:06:12.568 12:14:34 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:12.568 12:14:34 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:12.568 12:14:34 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:12.568 12:14:34 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:12.568 12:14:34 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:12.568 12:14:34 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:12.568 12:14:34 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:12.568 12:14:34 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:12.568 12:14:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.568 12:14:34 -- common/autotest_common.sh@10 -- # set +x 00:06:12.568 ************************************ 00:06:12.568 START TEST nvmf_tcp 00:06:12.568 ************************************ 00:06:12.568 12:14:34 nvmf_tcp -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:12.827 * Looking for test storage... 00:06:12.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf 00:06:12.827 12:14:34 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:12.827 12:14:34 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:12.827 12:14:34 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:12.827 12:14:34 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.827 12:14:34 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:12.827 12:14:34 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.827 12:14:34 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:12.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.827 --rc genhtml_branch_coverage=1 00:06:12.827 --rc genhtml_function_coverage=1 00:06:12.827 --rc genhtml_legend=1 00:06:12.827 --rc geninfo_all_blocks=1 00:06:12.827 --rc geninfo_unexecuted_blocks=1 00:06:12.827 00:06:12.827 ' 00:06:12.827 12:14:34 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:12.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.827 --rc genhtml_branch_coverage=1 00:06:12.827 --rc genhtml_function_coverage=1 00:06:12.827 --rc genhtml_legend=1 00:06:12.827 --rc geninfo_all_blocks=1 00:06:12.827 --rc geninfo_unexecuted_blocks=1 00:06:12.827 00:06:12.827 ' 00:06:12.827 12:14:34 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:12.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.827 --rc genhtml_branch_coverage=1 00:06:12.827 --rc genhtml_function_coverage=1 00:06:12.827 --rc genhtml_legend=1 00:06:12.827 --rc geninfo_all_blocks=1 00:06:12.827 --rc geninfo_unexecuted_blocks=1 00:06:12.827 00:06:12.827 ' 00:06:12.827 12:14:34 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:12.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.827 --rc genhtml_branch_coverage=1 00:06:12.827 --rc genhtml_function_coverage=1 00:06:12.827 --rc genhtml_legend=1 00:06:12.827 --rc geninfo_all_blocks=1 00:06:12.827 --rc geninfo_unexecuted_blocks=1 00:06:12.827 00:06:12.827 ' 00:06:12.827 12:14:34 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:12.827 12:14:34 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:12.827 12:14:34 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:12.827 12:14:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:12.827 12:14:34 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.827 12:14:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:12.827 ************************************ 00:06:12.827 START TEST nvmf_target_core 00:06:12.827 ************************************ 00:06:12.827 12:14:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:12.827 * Looking for test storage... 
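Every suite in this log re-runs the same lcov version gate, and the trace walks its component-wise numeric comparison (`lt 1.15 2` via cmp_versions in scripts/common.sh). That logic can be sketched as follows (simplified under assumptions: the real helper also splits on `-` and `:` via IFS, which is omitted here):

```shell
# Return 0 (true) when dotted version $1 is strictly less than $2,
# comparing numerically component by component; missing components are 0.
lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i x y
  for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
    x=${a[i]:-0} y=${b[i]:-0}
    if (( x < y )); then return 0; fi   # first differing component decides
    if (( x > y )); then return 1; fi
  done
  return 1                              # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

Note the comparison is numeric, not lexical: 1.2.9 is older than 1.10, which a plain string compare would get wrong.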
00:06:13.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf 00:06:13.097 12:14:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:13.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.097 --rc genhtml_branch_coverage=1 00:06:13.097 --rc genhtml_function_coverage=1 00:06:13.097 --rc genhtml_legend=1 00:06:13.097 --rc geninfo_all_blocks=1 00:06:13.097 --rc geninfo_unexecuted_blocks=1 00:06:13.097 00:06:13.097 ' 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:13.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.097 --rc genhtml_branch_coverage=1 
00:06:13.097 --rc genhtml_function_coverage=1 00:06:13.097 --rc genhtml_legend=1 00:06:13.097 --rc geninfo_all_blocks=1 00:06:13.097 --rc geninfo_unexecuted_blocks=1 00:06:13.097 00:06:13.097 ' 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:13.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.097 --rc genhtml_branch_coverage=1 00:06:13.097 --rc genhtml_function_coverage=1 00:06:13.097 --rc genhtml_legend=1 00:06:13.097 --rc geninfo_all_blocks=1 00:06:13.097 --rc geninfo_unexecuted_blocks=1 00:06:13.097 00:06:13.097 ' 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:13.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.097 --rc genhtml_branch_coverage=1 00:06:13.097 --rc genhtml_function_coverage=1 00:06:13.097 --rc genhtml_legend=1 00:06:13.097 --rc geninfo_all_blocks=1 00:06:13.097 --rc geninfo_unexecuted_blocks=1 00:06:13.097 00:06:13.097 ' 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:13.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
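The `common.sh: line 33: [: : integer expression expected` line above is bash's test builtin choking on an empty operand: `'[' '' -eq 1 ']'` is not a valid integer comparison, most likely because a test flag variable was unset in this environment. A defensive pattern defaults empty values before comparing (illustrative only, not the actual common.sh fix):

```shell
flag=""                                  # empty, as in the trace

# Reproduces the failure: -eq needs integers on both sides (error suppressed).
[ "$flag" -eq 1 ] 2>/dev/null && echo "never printed" || true

# Defensive form: ':-0' supplies an integer default, so the test is well-formed.
if [ "${flag:-0}" -eq 1 ]; then
  echo "feature enabled"
else
  echo "feature disabled"
fi
```

Note the script continues despite the error because `[` merely returns a non-match status; the condition silently evaluates as false rather than aborting the run.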
00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:13.097 12:14:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.098 12:14:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:13.098 ************************************ 00:06:13.098 START TEST nvmf_abort 00:06:13.098 ************************************ 00:06:13.098 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:13.098 * Looking for test storage... 
00:06:13.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:06:13.098 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:13.098 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:13.098 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 
00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:13.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.357 --rc genhtml_branch_coverage=1 00:06:13.357 --rc genhtml_function_coverage=1 00:06:13.357 --rc genhtml_legend=1 00:06:13.357 --rc geninfo_all_blocks=1 00:06:13.357 
--rc geninfo_unexecuted_blocks=1 00:06:13.357 00:06:13.357 ' 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:13.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.357 --rc genhtml_branch_coverage=1 00:06:13.357 --rc genhtml_function_coverage=1 00:06:13.357 --rc genhtml_legend=1 00:06:13.357 --rc geninfo_all_blocks=1 00:06:13.357 --rc geninfo_unexecuted_blocks=1 00:06:13.357 00:06:13.357 ' 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:13.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.357 --rc genhtml_branch_coverage=1 00:06:13.357 --rc genhtml_function_coverage=1 00:06:13.357 --rc genhtml_legend=1 00:06:13.357 --rc geninfo_all_blocks=1 00:06:13.357 --rc geninfo_unexecuted_blocks=1 00:06:13.357 00:06:13.357 ' 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:13.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.357 --rc genhtml_branch_coverage=1 00:06:13.357 --rc genhtml_function_coverage=1 00:06:13.357 --rc genhtml_legend=1 00:06:13.357 --rc geninfo_all_blocks=1 00:06:13.357 --rc geninfo_unexecuted_blocks=1 00:06:13.357 00:06:13.357 ' 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.357 12:14:35 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.357 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.358 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:13.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:13.358 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:13.358 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:13.358 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:13.358 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:13.358 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:13.358 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:13.358 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:13.358 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:13.358 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:13.358 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:13.358 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:13.358 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:13.358 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:13.358 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:13.358 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:13.358 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:06:13.358 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:13.358 12:14:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:19.926 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:19.926 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:19.926 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:19.926 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:19.926 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:19.926 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:19.926 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:19.926 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:19.926 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:19.926 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:19.926 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:19.926 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:19.926 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:19.926 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:19.926 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:19.926 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:19.927 12:14:41 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:19.927 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:19.927 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:19.927 12:14:41 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:19.927 Found net devices under 0000:86:00.0: cvl_0_0 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:06:19.927 Found net devices under 0000:86:00.1: cvl_0_1 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:19.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:19.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:06:19.927 00:06:19.927 --- 10.0.0.2 ping statistics --- 00:06:19.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.927 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:19.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:19.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:06:19.927 00:06:19.927 --- 10.0.0.1 ping statistics --- 00:06:19.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.927 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1452444 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1452444 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1452444 ']' 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.927 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:19.928 [2024-12-10 12:14:41.462746] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:06:19.928 [2024-12-10 12:14:41.462796] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:19.928 [2024-12-10 12:14:41.541387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.928 [2024-12-10 12:14:41.584688] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:19.928 [2024-12-10 12:14:41.584724] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:19.928 [2024-12-10 12:14:41.584733] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:19.928 [2024-12-10 12:14:41.584738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:19.928 [2024-12-10 12:14:41.584743] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:19.928 [2024-12-10 12:14:41.586200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.928 [2024-12-10 12:14:41.586308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.928 [2024-12-10 12:14:41.586308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:19.928 [2024-12-10 12:14:41.723368] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:19.928 Malloc0 00:06:19.928 12:14:41 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:19.928 Delay0 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:19.928 [2024-12-10 12:14:41.802037] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.928 12:14:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:19.928 [2024-12-10 12:14:41.980302] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:22.462 Initializing NVMe Controllers 00:06:22.462 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:22.462 controller IO queue size 128 less than required 00:06:22.462 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:22.462 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:22.462 Initialization complete. Launching workers. 
00:06:22.462 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36550 00:06:22.462 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36611, failed to submit 62 00:06:22.462 success 36554, unsuccessful 57, failed 0 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:22.462 rmmod nvme_tcp 00:06:22.462 rmmod nvme_fabrics 00:06:22.462 rmmod nvme_keyring 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:22.462 12:14:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1452444 ']' 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1452444 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1452444 ']' 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1452444 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1452444 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1452444' 00:06:22.462 killing process with pid 1452444 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1452444 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1452444 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:22.462 12:14:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.366 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:24.366 00:06:24.366 real 0m11.319s 00:06:24.366 user 0m11.945s 00:06:24.366 sys 0m5.480s 00:06:24.366 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.366 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:24.366 ************************************ 00:06:24.366 END TEST nvmf_abort 00:06:24.366 ************************************ 00:06:24.366 12:14:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:24.366 12:14:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:24.366 12:14:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.366 12:14:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:24.625 ************************************ 00:06:24.625 START TEST nvmf_ns_hotplug_stress 00:06:24.625 ************************************ 00:06:24.625 12:14:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:24.625 * Looking for test storage... 00:06:24.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:06:24.625 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:24.625 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:24.625 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:24.625 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:24.625 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.625 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.625 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.625 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.625 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.625 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.625 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.625 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.625 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.625 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.625 
12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.625 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.626 12:14:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:24.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.626 --rc genhtml_branch_coverage=1 00:06:24.626 --rc genhtml_function_coverage=1 00:06:24.626 --rc genhtml_legend=1 00:06:24.626 --rc geninfo_all_blocks=1 00:06:24.626 --rc geninfo_unexecuted_blocks=1 00:06:24.626 00:06:24.626 ' 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:24.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.626 --rc genhtml_branch_coverage=1 00:06:24.626 --rc genhtml_function_coverage=1 00:06:24.626 --rc genhtml_legend=1 00:06:24.626 --rc geninfo_all_blocks=1 00:06:24.626 --rc geninfo_unexecuted_blocks=1 00:06:24.626 00:06:24.626 ' 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:24.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.626 --rc genhtml_branch_coverage=1 00:06:24.626 --rc genhtml_function_coverage=1 00:06:24.626 --rc genhtml_legend=1 00:06:24.626 --rc geninfo_all_blocks=1 00:06:24.626 --rc geninfo_unexecuted_blocks=1 00:06:24.626 00:06:24.626 ' 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:24.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.626 --rc genhtml_branch_coverage=1 00:06:24.626 --rc genhtml_function_coverage=1 00:06:24.626 --rc genhtml_legend=1 00:06:24.626 --rc geninfo_all_blocks=1 00:06:24.626 --rc geninfo_unexecuted_blocks=1 00:06:24.626 
00:06:24.626 ' 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:24.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:24.626 12:14:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:31.196 12:14:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:31.196 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:31.196 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:31.196 12:14:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:31.196 Found net devices under 0000:86:00.0: cvl_0_0 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:31.196 12:14:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:31.196 Found net devices under 0000:86:00.1: cvl_0_1 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:31.196 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:31.197 12:14:52 
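The `nvmf_tcp_init` steps visible above amount to: flush both NICs, move the target NIC into its own namespace, address both ends, and bring the links up. A hypothetical helper (not part of SPDK) that reconstructs the command sequence, with interface, namespace, and IP names taken from the log:

```python
def tcp_init_cmds(target_if="cvl_0_0", initiator_if="cvl_0_1",
                  ns="cvl_0_0_ns_spdk",
                  target_ip="10.0.0.2", initiator_ip="10.0.0.1"):
    """Rebuild the ip(8) command lines the log shows, in order."""
    in_ns = f"ip netns exec {ns}"
    return [
        f"ip -4 addr flush {target_if}",
        f"ip -4 addr flush {initiator_if}",
        f"ip netns add {ns}",
        f"ip link set {target_if} netns {ns}",
        f"ip addr add {initiator_ip}/24 dev {initiator_if}",
        f"{in_ns} ip addr add {target_ip}/24 dev {target_if}",
        f"ip link set {initiator_if} up",
        f"{in_ns} ip link set {target_if} up",
        f"{in_ns} ip link set lo up",
    ]

for cmd in tcp_init_cmds():
    print(cmd)
```

Note the ordering constraint the log respects: the target NIC must be moved into the namespace before it can be addressed there, which is why every later command touching `cvl_0_0` is wrapped in `ip netns exec`.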
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:31.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:31.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:06:31.197 00:06:31.197 --- 10.0.0.2 ping statistics --- 00:06:31.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.197 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:31.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:31.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:06:31.197 00:06:31.197 --- 10.0.0.1 ping statistics --- 00:06:31.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.197 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1456518 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1456518 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1456518 ']' 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
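While the log prints "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...", `waitforlisten` is retrying a connection to the RPC socket until the target answers or retries run out. A minimal sketch of that polling loop, assuming a throwaway socket path rather than the real `/var/tmp/spdk.sock`:

```python
import os, socket, tempfile, threading, time

def wait_for_listen(sock_path, max_retries=100, delay=0.05):
    """Poll a UNIX socket until something accepts, or give up."""
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True          # target is up and accepting RPCs
        except OSError:
            time.sleep(delay)    # not listening yet, retry
        finally:
            s.close()
    return False

# Demo: start a listener shortly after polling begins.
path = os.path.join(tempfile.mkdtemp(), "rpc.sock")
def serve():
    time.sleep(0.2)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen(1)
    srv.accept()
threading.Thread(target=serve, daemon=True).start()
ok = wait_for_listen(path)
print(ok)
```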
00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.197 12:14:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:31.197 [2024-12-10 12:14:52.884005] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:06:31.197 [2024-12-10 12:14:52.884055] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.197 [2024-12-10 12:14:52.964286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.197 [2024-12-10 12:14:53.003903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:31.197 [2024-12-10 12:14:53.003939] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:31.197 [2024-12-10 12:14:53.003946] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:31.197 [2024-12-10 12:14:53.003952] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:31.197 [2024-12-10 12:14:53.003957] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
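The target was started with `-m 0xE`, and the log then reports "Total cores available: 3" with reactors on cores 1, 2 and 3. That follows from reading the mask as a bitmap, one bit per CPU core. A small decoder, for illustration only:

```python
def cores_from_mask(mask):
    """Return the core ids selected by an SPDK/DPDK-style core mask."""
    cores = []
    bit = 0
    while mask:
        if mask & 1:
            cores.append(bit)
        mask >>= 1
        bit += 1
    return cores

print(cores_from_mask(0xE))   # 0xE = 0b1110 -> cores 1, 2, 3
print(cores_from_mask(0x1))   # core 0 only
```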
00:06:31.197 [2024-12-10 12:14:53.005422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.197 [2024-12-10 12:14:53.005528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.197 [2024-12-10 12:14:53.005529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.197 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.197 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:31.197 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:31.197 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:31.197 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:31.197 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:31.197 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:31.197 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:31.197 [2024-12-10 12:14:53.315095] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.197 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:31.456 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:31.714 [2024-12-10 12:14:53.720527] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:31.714 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:31.972 12:14:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:32.231 Malloc0 00:06:32.231 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:32.231 Delay0 00:06:32.231 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.490 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:32.749 NULL1 00:06:32.749 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:33.007 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:33.007 12:14:54 
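Once `spdk_nvme_perf` is launched, the script repeatedly runs `kill -0 $PERF_PID` as a liveness probe: signal 0 delivers nothing and only checks that the PID still exists. A Python equivalent (a hypothetical helper, not SPDK code):

```python
import os

def pid_alive(pid):
    """kill -0 semantics: does this PID currently exist?"""
    try:
        os.kill(pid, 0)   # signal 0: existence/permission check only
        return True
    except ProcessLookupError:
        return False      # no such process
    except PermissionError:
        return True       # exists, but owned by another user

print(pid_alive(os.getpid()))  # our own process is certainly alive
```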
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1456789 00:06:33.007 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:33.007 12:14:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.384 Read completed with error (sct=0, sc=11) 00:06:34.384 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.384 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:34.384 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:34.664 true 00:06:34.664 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:34.664 12:14:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.277 12:14:57 
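Each iteration of the stress loop that repeats from here on is the same RPC triple: remove namespace 1, re-attach `Delay0`, and grow `NULL1` by one block count (`null_size` climbing from 1001). A builder that reproduces those argv lists; the subsystem NQN and bdev names come from the log, while the short `rpc.py` path is a stand-in for the full Jenkins workspace path:

```python
RPC = "scripts/rpc.py"          # stand-in for the full workspace path
NQN = "nqn.2016-06.io.spdk:cnode1"

def hotplug_iteration(null_size):
    """One loop body of ns_hotplug_stress.sh as argv lists, in order."""
    return [
        [RPC, "nvmf_subsystem_remove_ns", NQN, "1"],
        [RPC, "nvmf_subsystem_add_ns", NQN, "Delay0"],
        [RPC, "bdev_null_resize", "NULL1", str(null_size)],
    ]

# First iteration after startup, as seen in the log:
for argv in hotplug_iteration(1001):
    print(" ".join(argv))
```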
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.560 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:35.560 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:35.818 true 00:06:35.818 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:35.818 12:14:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.076 12:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.076 12:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:36.077 12:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:36.335 true 00:06:36.335 12:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:36.335 12:14:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.271 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.271 12:14:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.529 12:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:37.529 12:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:37.788 true 00:06:37.788 12:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:37.788 12:14:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.723 12:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.723 12:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:38.723 12:15:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:38.982 true 00:06:38.982 12:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:38.982 12:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.240 12:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.499 12:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:39.499 12:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:39.757 true 00:06:39.757 12:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:39.757 12:15:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.692 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.951 12:15:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:40.951 12:15:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:41.209 true 00:06:41.209 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:41.209 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.777 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.038 12:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.038 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.038 12:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:42.038 12:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:42.297 true 00:06:42.297 12:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:42.297 12:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.556 12:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.815 12:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:42.815 12:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:42.815 true 00:06:42.815 12:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:42.815 12:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.192 12:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.192 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.192 12:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:44.192 12:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:44.454 true 00:06:44.454 12:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:44.454 12:15:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.389 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.648 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:45.648 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:45.648 true 00:06:45.648 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:45.648 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.906 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.164 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:46.164 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:46.423 true 00:06:46.423 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:46.423 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.359 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.618 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:47.618 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:47.876 true 00:06:47.876 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:47.876 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.811 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.811 12:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.811 12:15:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:48.811 12:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:49.069 true 00:06:49.069 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:49.069 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.328 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.328 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:49.328 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:49.587 true 00:06:49.587 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:49.587 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.964 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.964 Message suppressed 999 times: Read completed 
with error (sct=0, sc=11) 00:06:50.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.964 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:50.964 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:51.223 true 00:06:51.223 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:51.223 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.159 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.159 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:52.159 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:52.418 true 00:06:52.418 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:52.418 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:52.677 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.677 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:52.677 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:52.936 true 00:06:52.936 12:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:52.936 12:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.871 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.129 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.129 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:54.129 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:54.387 true 00:06:54.387 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:54.387 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.646 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.904 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:54.904 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:54.904 true 00:06:54.904 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:54.904 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.280 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.280 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:56.280 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:56.545 true 00:06:56.545 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:56.545 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.481 12:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.481 12:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:57.481 12:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:57.740 true 00:06:57.740 12:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:57.740 12:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.998 12:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.257 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:58.257 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1023 00:06:58.257 true 00:06:58.257 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:58.257 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.634 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.634 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:59.634 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:59.634 true 00:06:59.634 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:06:59.634 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.892 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.151 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:00.151 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:00.409 true 00:07:00.409 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:07:00.409 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.345 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.345 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.345 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.603 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:01.603 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:01.861 true 00:07:01.861 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:07:01.861 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.797 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.797 12:15:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:02.798 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:03.054 true 00:07:03.054 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:07:03.054 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.312 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.312 Initializing NVMe Controllers 00:07:03.312 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:03.312 Controller IO queue size 128, less than required. 00:07:03.312 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:03.312 Controller IO queue size 128, less than required. 00:07:03.312 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:03.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:03.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:03.312 Initialization complete. Launching workers. 
00:07:03.312 ======================================================== 00:07:03.312 Latency(us) 00:07:03.312 Device Information : IOPS MiB/s Average min max 00:07:03.312 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1881.40 0.92 46665.26 2198.15 1023572.90 00:07:03.312 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17630.33 8.61 7259.78 2367.71 455847.01 00:07:03.312 ======================================================== 00:07:03.312 Total : 19511.73 9.53 11059.41 2198.15 1023572.90 00:07:03.312 00:07:03.312 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:03.312 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:03.570 true 00:07:03.570 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1456789 00:07:03.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1456789) - No such process 00:07:03.570 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1456789 00:07:03.570 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.828 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:04.087 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:04.087 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # 
pids=() 00:07:04.087 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:04.087 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:04.087 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:04.087 null0 00:07:04.087 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:04.087 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:04.087 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:04.346 null1 00:07:04.346 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:04.346 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:04.346 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:04.605 null2 00:07:04.605 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:04.605 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:04.605 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:04.863 null3 00:07:04.863 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:04.863 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:04.863 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:04.864 null4 00:07:05.122 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:05.122 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:05.122 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:05.122 null5 00:07:05.122 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:05.122 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:05.122 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:05.381 null6 00:07:05.381 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:05.381 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:05.381 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:05.641 null7 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:05.641 12:15:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:05.641 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:05.642 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:05.642 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:05.642 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.642 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:05.642 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:05.642 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:05.642 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:05.642 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:05.642 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1462912 1462914 1462915 1462917 1462919 1462921 1462923 1462925 00:07:05.642 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:05.642 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:05.642 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.642 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:05.901 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:05.901 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.901 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:05.901 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:05.901 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:05.901 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:05.901 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:05.901 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:05.901 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.901 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.901 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:05.901 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.901 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.901 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 
null2 00:07:05.901 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.901 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.901 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:05.901 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:05.901 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:05.901 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:06.160 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.161 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.161 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:06.161 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.161 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.161 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:06.161 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.161 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.161 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:06.161 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.161 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.161 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:06.161 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:06.161 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:06.161 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:06.161 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.161 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:06.161 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:06.161 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:06.161 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:06.420 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.420 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.420 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:06.420 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.421 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.421 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:06.421 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.421 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.421 12:15:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:06.421 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.421 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.421 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:06.421 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.421 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.421 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:06.421 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.421 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.421 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:06.421 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.421 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.421 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:06.421 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.421 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.421 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:06.679 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:06.679 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.680 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:06.680 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:06.680 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:06.680 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:07:06.680 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:06.680 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:06.939 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.939 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.939 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:06.939 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.939 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.939 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:06.939 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.939 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.939 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:06.939 12:15:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.939 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.939 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:06.939 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.939 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.939 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:06.939 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.939 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.939 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.939 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.939 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:06.939 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:06.939 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:06.939 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:06.939 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.198 12:15:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.198 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:07.478 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:07.478 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.478 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:07.478 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:07.478 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:07.478 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.478 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:07.478 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:07:07.737 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.737 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.737 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:07.737 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.737 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.737 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:07.737 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.737 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.737 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:07.737 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.737 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.737 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:07.737 12:15:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.737 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.737 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:07.737 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.737 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.737 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:07.737 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.737 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.737 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:07.737 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:07.737 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:07.737 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:07.996 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:07.996 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.996 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:07.996 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:07.996 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.996 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:07.996 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:07.996 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:08.255 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.255 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.256 12:15:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:08.256 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:08.515 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.515 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.515 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:08.515 
12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.515 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.515 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:08.515 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.515 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.515 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:08.515 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.515 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.515 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:08.515 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.515 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.515 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:08.515 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.515 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.515 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:08.515 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.515 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.515 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:08.515 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.515 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.515 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:08.774 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:08.774 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:08.774 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:08.775 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:08.775 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.775 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:08.775 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:08.775 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.034 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:09.293 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:09.293 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:09.293 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:09.293 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:09.293 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:09.293 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.293 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:09.293 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:09.293 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.293 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.294 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:09.294 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.294 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.294 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:09.294 12:15:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.294 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.294 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:09.294 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.294 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.294 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:09.294 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.294 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.552 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:09.552 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.552 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.552 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:09.552 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.552 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.552 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:09.552 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.552 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.552 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:09.552 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:09.552 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:09.552 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:09.552 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:09.552 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.552 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:09.553 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:09.553 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.811 12:15:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:09.811 rmmod nvme_tcp 00:07:09.811 rmmod nvme_fabrics 00:07:09.811 rmmod nvme_keyring 00:07:09.811 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:09.812 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:09.812 12:15:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:09.812 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1456518 ']' 00:07:09.812 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1456518 00:07:09.812 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1456518 ']' 00:07:09.812 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1456518 00:07:09.812 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:09.812 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.812 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1456518 00:07:10.071 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:10.071 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:10.071 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1456518' 00:07:10.071 killing process with pid 1456518 00:07:10.071 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1456518 00:07:10.071 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1456518 00:07:10.071 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:10.071 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:10.071 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:07:10.071 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:10.071 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:10.071 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:10.071 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:10.071 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:10.071 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:10.071 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.071 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:10.071 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.609 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:12.610 00:07:12.610 real 0m47.713s 00:07:12.610 user 3m13.363s 00:07:12.610 sys 0m15.473s 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:12.610 ************************************ 00:07:12.610 END TEST nvmf_ns_hotplug_stress 00:07:12.610 ************************************ 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:12.610 12:15:34 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:12.610 ************************************ 00:07:12.610 START TEST nvmf_delete_subsystem 00:07:12.610 ************************************ 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:12.610 * Looking for test storage... 00:07:12.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.610 12:15:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:12.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.610 --rc genhtml_branch_coverage=1 00:07:12.610 --rc genhtml_function_coverage=1 00:07:12.610 --rc genhtml_legend=1 
00:07:12.610 --rc geninfo_all_blocks=1 00:07:12.610 --rc geninfo_unexecuted_blocks=1 00:07:12.610 00:07:12.610 ' 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:12.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.610 --rc genhtml_branch_coverage=1 00:07:12.610 --rc genhtml_function_coverage=1 00:07:12.610 --rc genhtml_legend=1 00:07:12.610 --rc geninfo_all_blocks=1 00:07:12.610 --rc geninfo_unexecuted_blocks=1 00:07:12.610 00:07:12.610 ' 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:12.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.610 --rc genhtml_branch_coverage=1 00:07:12.610 --rc genhtml_function_coverage=1 00:07:12.610 --rc genhtml_legend=1 00:07:12.610 --rc geninfo_all_blocks=1 00:07:12.610 --rc geninfo_unexecuted_blocks=1 00:07:12.610 00:07:12.610 ' 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:12.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.610 --rc genhtml_branch_coverage=1 00:07:12.610 --rc genhtml_function_coverage=1 00:07:12.610 --rc genhtml_legend=1 00:07:12.610 --rc geninfo_all_blocks=1 00:07:12.610 --rc geninfo_unexecuted_blocks=1 00:07:12.610 00:07:12.610 ' 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.610 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@15 -- # shopt -s extglob 00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:12.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable
00:07:12.611 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=()
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=()
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=()
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=()
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=()
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:07:19.183 Found 0000:86:00.0 (0x8086 - 0x159b)
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:19.183 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:07:19.184 Found 0000:86:00.1 (0x8086 - 0x159b)
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:07:19.184 Found net devices under 0000:86:00.0: cvl_0_0
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:07:19.184 Found net devices under 0000:86:00.1: cvl_0_1
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:07:19.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:19.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms
00:07:19.184
00:07:19.184 --- 10.0.0.2 ping statistics ---
00:07:19.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:19.184 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:19.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:19.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms
00:07:19.184
00:07:19.184 --- 10.0.0.1 ping statistics ---
00:07:19.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:19.184 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:19.184 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1467313
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1467313
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1467313 ']'
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:19.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:19.185 [2024-12-10 12:15:40.608500] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization...
00:07:19.185 [2024-12-10 12:15:40.608552] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:19.185 [2024-12-10 12:15:40.688485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:19.185 [2024-12-10 12:15:40.729373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:19.185 [2024-12-10 12:15:40.729409] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:19.185 [2024-12-10 12:15:40.729416] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:19.185 [2024-12-10 12:15:40.729422] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:19.185 [2024-12-10 12:15:40.729428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:19.185 [2024-12-10 12:15:40.730645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:19.185 [2024-12-10 12:15:40.730647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:19.185 [2024-12-10 12:15:40.872197] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:19.185 [2024-12-10 12:15:40.892397] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:19.185 NULL1
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:19.185 Delay0
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1467545
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:07:19.185 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:07:19.186 [2024-12-10 12:15:41.003326] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:07:21.161 12:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:21.161 12:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.161 12:15:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:21.161 Write completed with error (sct=0, sc=8) 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 starting I/O failed: -6 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 Write completed with error (sct=0, sc=8) 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 starting I/O failed: -6 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 Write completed with error (sct=0, sc=8) 00:07:21.161 starting I/O failed: -6 00:07:21.161 Write completed with error (sct=0, sc=8) 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 Write completed with error (sct=0, sc=8) 00:07:21.161 starting I/O failed: -6 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 Write completed with error (sct=0, sc=8) 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 Write completed with error (sct=0, sc=8) 00:07:21.161 starting I/O failed: -6 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 starting I/O failed: -6 00:07:21.161 Write completed with error (sct=0, sc=8) 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 Read completed with error 
(sct=0, sc=8) 00:07:21.161 starting I/O failed: -6 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 starting I/O failed: -6 00:07:21.161 Write completed with error (sct=0, sc=8) 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 Write completed with error (sct=0, sc=8) 00:07:21.161 Write completed with error (sct=0, sc=8) 00:07:21.161 starting I/O failed: -6 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.161 Read completed with error (sct=0, sc=8) 00:07:21.162 starting I/O failed: -6 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 [2024-12-10 12:15:43.041587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14462c0 is same with the state(6) to be set 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write 
completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 [2024-12-10 12:15:43.041907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14464a0 is same with the state(6) to be set 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with 
error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 
00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 [2024-12-10 12:15:43.042109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446860 is same with the state(6) to be set 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 starting I/O failed: -6 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 starting I/O failed: -6 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 starting I/O failed: -6 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 starting I/O failed: -6 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 starting I/O failed: -6 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 starting I/O failed: -6 00:07:21.162 Read completed with error (sct=0, sc=8) 
00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 starting I/O failed: -6 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 starting I/O failed: -6 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 starting I/O failed: -6 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Read completed with error (sct=0, sc=8) 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 starting I/O failed: -6 00:07:21.162 Write completed with error (sct=0, sc=8) 00:07:21.162 [2024-12-10 12:15:43.042632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff88800d510 is same with the state(6) to be set 00:07:22.096 [2024-12-10 12:15:44.015529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14479b0 is same with the state(6) to be set 00:07:22.096 Read completed with error (sct=0, sc=8) 00:07:22.096 Write completed with error (sct=0, sc=8) 00:07:22.096 Read completed with error (sct=0, sc=8) 00:07:22.096 Read completed with error (sct=0, sc=8) 00:07:22.096 Write completed with error (sct=0, sc=8) 00:07:22.096 Read completed with error (sct=0, sc=8) 00:07:22.096 Read completed with error (sct=0, sc=8) 00:07:22.096 Write completed with error (sct=0, sc=8) 00:07:22.096 Read completed with error (sct=0, sc=8) 00:07:22.096 Write completed with error (sct=0, sc=8) 00:07:22.096 Read completed with error (sct=0, sc=8) 00:07:22.096 Read completed with error (sct=0, sc=8) 00:07:22.096 Read 
completed with error (sct=0, sc=8) 00:07:22.096 Read completed with error (sct=0, sc=8) 00:07:22.096 Write completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 [2024-12-10 12:15:44.043651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff88800d060 is same with the state(6) to be set 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Write completed 
with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 [2024-12-10 12:15:44.043791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff88800d840 is same with the state(6) to be set 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 [2024-12-10 12:15:44.044898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1446680 is same with 
the state(6) to be set 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 Read completed with error (sct=0, sc=8) 00:07:22.097 Write completed with error (sct=0, sc=8) 00:07:22.097 [2024-12-10 12:15:44.045710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff888000c80 is same with the state(6) to be set 00:07:22.097 Initializing NVMe Controllers 00:07:22.097 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:22.097 Controller IO queue size 128, less than required. 
00:07:22.097 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:22.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:22.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:22.097 Initialization complete. Launching workers.
00:07:22.097 ========================================================
00:07:22.097 Latency(us)
00:07:22.097 Device Information : IOPS MiB/s Average min max
00:07:22.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 155.08 0.08 892222.29 323.88 2002304.71
00:07:22.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 156.07 0.08 1090094.61 2085.88 2001547.66
00:07:22.097 ========================================================
00:07:22.097 Total : 311.15 0.15 991474.54 323.88 2002304.71
00:07:22.097
00:07:22.097 [2024-12-10 12:15:44.046298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14479b0 (9): Bad file descriptor
00:07:22.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:22.097 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:22.097 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:22.097 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1467545
00:07:22.097 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1467545
00:07:22.664
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1467545) - No such process 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1467545 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1467545 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1467545 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.664 12:15:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.664 [2024-12-10 12:15:44.576809] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1468032 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:22.664 12:15:44 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1468032 00:07:22.664 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:22.664 [2024-12-10 12:15:44.664915] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:23.230 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:23.230 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1468032 00:07:23.230 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:23.491 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:23.491 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1468032 00:07:23.491 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:24.057 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:24.057 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1468032 00:07:24.057 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:24.623 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:24.623 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1468032 00:07:24.623 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # 
sleep 0.5
00:07:25.189 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:25.189 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1468032
00:07:25.189 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:25.755 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:25.755 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1468032
00:07:25.755 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:25.755 Initializing NVMe Controllers
00:07:25.755 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:25.755 Controller IO queue size 128, less than required.
00:07:25.755 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:25.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:25.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:25.755 Initialization complete. Launching workers.
00:07:25.755 ========================================================
00:07:25.755 Latency(us)
00:07:25.755 Device Information : IOPS MiB/s Average min max
00:07:25.755 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002467.56 1000132.67 1008814.15
00:07:25.755 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004822.26 1000158.99 1041988.20
00:07:25.755 ========================================================
00:07:25.755 Total : 256.00 0.12 1003644.91 1000132.67 1041988.20
00:07:25.755
00:07:26.013 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:26.013 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1468032
00:07:26.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1468032) - No such process
00:07:26.013 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1468032
00:07:26.013 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:26.013 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:26.013 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:26.013 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:07:26.013 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:26.013 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:07:26.013 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:26.013 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r
nvme-tcp 00:07:26.013 rmmod nvme_tcp 00:07:26.013 rmmod nvme_fabrics 00:07:26.013 rmmod nvme_keyring 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1467313 ']' 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1467313 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1467313 ']' 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1467313 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1467313 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1467313' 00:07:26.273 killing process with pid 1467313 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1467313 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
1467313 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.273 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:28.809 00:07:28.809 real 0m16.135s 00:07:28.809 user 0m29.000s 00:07:28.809 sys 0m5.511s 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.809 ************************************ 00:07:28.809 END TEST 
nvmf_delete_subsystem 00:07:28.809 ************************************ 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:28.809 ************************************ 00:07:28.809 START TEST nvmf_host_management 00:07:28.809 ************************************ 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:28.809 * Looking for test storage... 00:07:28.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.809 12:15:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:28.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.809 --rc genhtml_branch_coverage=1 00:07:28.809 --rc genhtml_function_coverage=1 00:07:28.809 --rc genhtml_legend=1 00:07:28.809 --rc 
geninfo_all_blocks=1 00:07:28.809 --rc geninfo_unexecuted_blocks=1 00:07:28.809 00:07:28.809 ' 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:28.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.809 --rc genhtml_branch_coverage=1 00:07:28.809 --rc genhtml_function_coverage=1 00:07:28.809 --rc genhtml_legend=1 00:07:28.809 --rc geninfo_all_blocks=1 00:07:28.809 --rc geninfo_unexecuted_blocks=1 00:07:28.809 00:07:28.809 ' 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:28.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.809 --rc genhtml_branch_coverage=1 00:07:28.809 --rc genhtml_function_coverage=1 00:07:28.809 --rc genhtml_legend=1 00:07:28.809 --rc geninfo_all_blocks=1 00:07:28.809 --rc geninfo_unexecuted_blocks=1 00:07:28.809 00:07:28.809 ' 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:28.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.809 --rc genhtml_branch_coverage=1 00:07:28.809 --rc genhtml_function_coverage=1 00:07:28.809 --rc genhtml_legend=1 00:07:28.809 --rc geninfo_all_blocks=1 00:07:28.809 --rc geninfo_unexecuted_blocks=1 00:07:28.809 00:07:28.809 ' 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 
00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.809 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:28.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:28.810 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.381 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.381 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:35.381 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:35.381 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:35.381 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:35.381 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:35.381 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:35.381 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:35.382 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:35.382 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:35.382 12:15:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:35.382 Found net devices under 0000:86:00.0: cvl_0_0 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:35.382 Found net devices under 0000:86:00.1: cvl_0_1 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:35.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:35.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:07:35.382 00:07:35.382 --- 10.0.0.2 ping statistics --- 00:07:35.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.382 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:35.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:07:35.382 00:07:35.382 --- 10.0.0.1 ping statistics --- 00:07:35.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.382 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.382 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:35.383 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:35.383 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:35.383 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:35.383 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:35.383 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:35.383 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:35.383 12:15:56 
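The network init above moves one port (`cvl_0_0`) into a private namespace, assigns 10.0.0.1/10.0.0.2 across the pair, opens TCP port 4420 via the `ipts` wrapper, and verifies reachability with ping in both directions. The `ipts` helper visible in the trace forwards its arguments to iptables while tagging the rule with an `SPDK_NVMF:` comment so teardown can find and delete it later. A minimal sketch of that tagging idea (names are illustrative; this echoes the command instead of invoking iptables, which needs root):

```shell
# Sketch of the ipts-style wrapper seen in the trace: pass the rule
# arguments through to iptables and append a comment match carrying the
# original arguments, so cleanup can grep rules by the SPDK_NVMF: tag.
# ipts_sketch is a hypothetical name; it prints rather than executes.
ipts_sketch() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts_sketch -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

Tagging rules with a unique comment is what lets the harness remove only its own iptables entries on exit without disturbing unrelated firewall state.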
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.383 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1472261 00:07:35.383 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1472261 00:07:35.383 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:35.383 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1472261 ']' 00:07:35.383 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.383 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.383 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.383 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.383 12:15:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.383 [2024-12-10 12:15:56.837954] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:07:35.383 [2024-12-10 12:15:56.838001] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.383 [2024-12-10 12:15:56.915883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.383 [2024-12-10 12:15:56.955995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.383 [2024-12-10 12:15:56.956031] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.383 [2024-12-10 12:15:56.956038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.383 [2024-12-10 12:15:56.956043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.383 [2024-12-10 12:15:56.956048] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:35.383 [2024-12-10 12:15:56.957661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.383 [2024-12-10 12:15:56.957776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.383 [2024-12-10 12:15:56.957886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.383 [2024-12-10 12:15:56.957888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.383 [2024-12-10 12:15:57.107692] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:35.383 12:15:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.383 Malloc0 00:07:35.383 [2024-12-10 12:15:57.180183] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1472315 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1472315 /var/tmp/bdevperf.sock 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1472315 ']' 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:35.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:35.383 { 00:07:35.383 "params": { 00:07:35.383 "name": "Nvme$subsystem", 00:07:35.383 "trtype": "$TEST_TRANSPORT", 00:07:35.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:35.383 "adrfam": "ipv4", 00:07:35.383 "trsvcid": "$NVMF_PORT", 00:07:35.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:35.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:35.383 "hdgst": ${hdgst:-false}, 
00:07:35.383 "ddgst": ${ddgst:-false} 00:07:35.383 }, 00:07:35.383 "method": "bdev_nvme_attach_controller" 00:07:35.383 } 00:07:35.383 EOF 00:07:35.383 )") 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:35.383 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:35.383 "params": { 00:07:35.383 "name": "Nvme0", 00:07:35.383 "trtype": "tcp", 00:07:35.383 "traddr": "10.0.0.2", 00:07:35.383 "adrfam": "ipv4", 00:07:35.383 "trsvcid": "4420", 00:07:35.383 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:35.383 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:35.383 "hdgst": false, 00:07:35.383 "ddgst": false 00:07:35.383 }, 00:07:35.383 "method": "bdev_nvme_attach_controller" 00:07:35.383 }' 00:07:35.383 [2024-12-10 12:15:57.277804] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:07:35.383 [2024-12-10 12:15:57.277850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472315 ] 00:07:35.383 [2024-12-10 12:15:57.339053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.383 [2024-12-10 12:15:57.380572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.642 Running I/O for 10 seconds... 
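The `gen_nvmf_target_json` call above builds the bdevperf `--json` config on the fly: one `bdev_nvme_attach_controller` stanza per subsystem from a heredoc, merged by `jq` and fed through `/dev/fd/63`. A stripped-down sketch of the stanza generation, assuming a fixed target IP and port as in this run (`gen_target_json_sketch` is an illustrative name, not the SPDK helper):

```shell
# Emit one bdev_nvme_attach_controller stanza for subsystem $1, mirroring
# the heredoc in gen_nvmf_target_json; the real helper collects one such
# stanza per requested subsystem and merges them with jq.
gen_target_json_sketch() {
    local subsystem=${1:-0}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_target_json_sketch 0
```

Generating the config per subsystem index is what lets the same helper serve tests that attach one controller or many, without maintaining static JSON files per topology.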
00:07:35.642 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.642 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:35.642 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:35.642 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.642 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.642 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.642 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:35.642 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:35.642 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:35.642 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:35.642 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:35.642 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:35.642 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:35.642 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:35.642 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:35.642 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:35.643 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.643 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:35.643 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.643 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:07:35.643 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:07:35.643 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:35.901 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:35.901 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:35.901 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:35.901 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:35.901 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.901 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.162 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.162 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:07:36.162 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:07:36.162 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:36.162 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:36.162 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:36.162 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:36.162 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.162 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.162 [2024-12-10 12:15:58.103281] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123aed0 is same with the state(6) to be set 00:07:36.162 [2024-12-10 12:15:58.103825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.163 [2024-12-10 12:15:58.103857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.163 [2024-12-10 12:15:58.103875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.163 [2024-12-10 12:15:58.103883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.163 [2024-12-10
12:15:58.103891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.163 [2024-12-10 12:15:58.103898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.163 [2024-12-10 12:15:58.104822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:36.164 [2024-12-10 12:15:58.104828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.164 [2024-12-10 12:15:58.104836] nvme_tcp.c:
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b2120 is same with the state(6) to be set 00:07:36.164 [2024-12-10 12:15:58.105811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:36.164 task offset: 90112 on job bdev=Nvme0n1 fails 00:07:36.164 00:07:36.164 Latency(us) 00:07:36.164 [2024-12-10T11:15:58.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.164 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:36.164 Job: Nvme0n1 ended in about 0.40 seconds with error 00:07:36.164 Verification LBA range: start 0x0 length 0x400 00:07:36.164 Nvme0n1 : 0.40 1769.05 110.57 160.82 0.00 32270.28 4302.58 27582.11 00:07:36.164 [2024-12-10T11:15:58.332Z] =================================================================================================================== 00:07:36.164 [2024-12-10T11:15:58.332Z] Total : 1769.05 110.57 160.82 0.00 32270.28 4302.58 27582.11 00:07:36.164 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.164 [2024-12-10 12:15:58.108227] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:36.164 [2024-12-10 12:15:58.108256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13991a0 (9): Bad file descriptor 00:07:36.164 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:36.164 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.164 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.164 [2024-12-10 12:15:58.112992] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 
'nqn.2016-06.io.spdk:host0' 00:07:36.164 [2024-12-10 12:15:58.113062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:36.164 [2024-12-10 12:15:58.113085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:36.164 [2024-12-10 12:15:58.113097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:36.164 [2024-12-10 12:15:58.113105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:36.164 [2024-12-10 12:15:58.113112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:36.164 [2024-12-10 12:15:58.113118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13991a0 00:07:36.164 [2024-12-10 12:15:58.113141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13991a0 (9): Bad file descriptor 00:07:36.164 [2024-12-10 12:15:58.113153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:07:36.164 [2024-12-10 12:15:58.113165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:07:36.164 [2024-12-10 12:15:58.113174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:07:36.164 [2024-12-10 12:15:58.113182] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:07:36.164 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.164 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:37.101 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1472315 00:07:37.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1472315) - No such process 00:07:37.101 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:37.101 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:37.101 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:37.101 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:37.101 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:37.101 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:37.101 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:37.101 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:37.101 { 00:07:37.101 "params": { 00:07:37.101 "name": "Nvme$subsystem", 00:07:37.101 "trtype": "$TEST_TRANSPORT", 00:07:37.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:37.101 "adrfam": "ipv4", 00:07:37.101 "trsvcid": "$NVMF_PORT", 00:07:37.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:37.101 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:07:37.101 "hdgst": ${hdgst:-false}, 00:07:37.101 "ddgst": ${ddgst:-false} 00:07:37.101 }, 00:07:37.101 "method": "bdev_nvme_attach_controller" 00:07:37.101 } 00:07:37.101 EOF 00:07:37.101 )") 00:07:37.101 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:37.101 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:37.101 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:37.101 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:37.101 "params": { 00:07:37.101 "name": "Nvme0", 00:07:37.101 "trtype": "tcp", 00:07:37.101 "traddr": "10.0.0.2", 00:07:37.101 "adrfam": "ipv4", 00:07:37.101 "trsvcid": "4420", 00:07:37.101 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:37.101 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:37.101 "hdgst": false, 00:07:37.101 "ddgst": false 00:07:37.101 }, 00:07:37.101 "method": "bdev_nvme_attach_controller" 00:07:37.101 }' 00:07:37.101 [2024-12-10 12:15:59.172770] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:07:37.102 [2024-12-10 12:15:59.172816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472782 ] 00:07:37.102 [2024-12-10 12:15:59.250038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.360 [2024-12-10 12:15:59.289473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.619 Running I/O for 1 seconds... 
00:07:38.557 1984.00 IOPS, 124.00 MiB/s 00:07:38.557 Latency(us) 00:07:38.557 [2024-12-10T11:16:00.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.558 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:38.558 Verification LBA range: start 0x0 length 0x400 00:07:38.558 Nvme0n1 : 1.03 1995.20 124.70 0.00 0.00 31573.60 6183.18 27924.03 00:07:38.558 [2024-12-10T11:16:00.726Z] =================================================================================================================== 00:07:38.558 [2024-12-10T11:16:00.726Z] Total : 1995.20 124.70 0.00 0.00 31573.60 6183.18 27924.03 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:38.817 12:16:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:38.817 rmmod nvme_tcp 00:07:38.817 rmmod nvme_fabrics 00:07:38.817 rmmod nvme_keyring 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1472261 ']' 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1472261 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1472261 ']' 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1472261 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1472261 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1472261' 00:07:38.817 killing process with pid 1472261 00:07:38.817 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1472261 00:07:38.817 12:16:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1472261 00:07:39.076 [2024-12-10 12:16:01.091218] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:39.076 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:39.076 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:39.076 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:39.076 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:39.076 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:39.076 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:39.076 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:39.076 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:39.076 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:39.076 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.076 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:39.076 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.614 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:41.614 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:41.614 00:07:41.614 real 0m12.633s 00:07:41.614 user 0m20.765s 
00:07:41.614 sys 0m5.579s 00:07:41.614 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.614 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.614 ************************************ 00:07:41.614 END TEST nvmf_host_management 00:07:41.614 ************************************ 00:07:41.614 12:16:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:41.614 12:16:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:41.614 12:16:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.614 12:16:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:41.614 ************************************ 00:07:41.614 START TEST nvmf_lvol 00:07:41.614 ************************************ 00:07:41.614 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:41.614 * Looking for test storage... 
00:07:41.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:07:41.614 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:41.614 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:41.614 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.615 12:16:03 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:41.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.615 --rc genhtml_branch_coverage=1 00:07:41.615 --rc genhtml_function_coverage=1 00:07:41.615 --rc genhtml_legend=1 00:07:41.615 --rc geninfo_all_blocks=1 00:07:41.615 --rc geninfo_unexecuted_blocks=1 
00:07:41.615 00:07:41.615 ' 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:41.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.615 --rc genhtml_branch_coverage=1 00:07:41.615 --rc genhtml_function_coverage=1 00:07:41.615 --rc genhtml_legend=1 00:07:41.615 --rc geninfo_all_blocks=1 00:07:41.615 --rc geninfo_unexecuted_blocks=1 00:07:41.615 00:07:41.615 ' 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:41.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.615 --rc genhtml_branch_coverage=1 00:07:41.615 --rc genhtml_function_coverage=1 00:07:41.615 --rc genhtml_legend=1 00:07:41.615 --rc geninfo_all_blocks=1 00:07:41.615 --rc geninfo_unexecuted_blocks=1 00:07:41.615 00:07:41.615 ' 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:41.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.615 --rc genhtml_branch_coverage=1 00:07:41.615 --rc genhtml_function_coverage=1 00:07:41.615 --rc genhtml_legend=1 00:07:41.615 --rc geninfo_all_blocks=1 00:07:41.615 --rc geninfo_unexecuted_blocks=1 00:07:41.615 00:07:41.615 ' 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.615 12:16:03 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:41.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:41.615 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:41.616 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.616 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.616 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.616 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:41.616 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:41.616 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:41.616 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:48.188 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:48.188 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:48.188 
12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:48.188 Found net devices under 0000:86:00.0: cvl_0_0 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:48.188 12:16:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:48.188 Found net devices under 0000:86:00.1: cvl_0_1 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:48.188 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:48.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:48.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:07:48.188 00:07:48.188 --- 10.0.0.2 ping statistics --- 00:07:48.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.189 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:07:48.189 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:48.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:07:48.189 00:07:48.189 --- 10.0.0.1 ping statistics --- 00:07:48.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.189 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:07:48.189 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.189 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:48.189 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:48.189 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.189 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:48.189 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:48.189 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.189 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:48.189 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:48.189 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:48.189 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:48.189 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:48.189 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:48.189 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1476555 00:07:48.189 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:48.189 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1476555 00:07:48.189 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1476555 ']' 00:07:48.189 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.189 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.189 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.189 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.189 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:48.189 [2024-12-10 12:16:09.529790] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:07:48.189 [2024-12-10 12:16:09.529841] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.189 [2024-12-10 12:16:09.613328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:48.189 [2024-12-10 12:16:09.655143] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.189 [2024-12-10 12:16:09.655182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.189 [2024-12-10 12:16:09.655191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.189 [2024-12-10 12:16:09.655197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.189 [2024-12-10 12:16:09.655202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:48.189 [2024-12-10 12:16:09.656501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.189 [2024-12-10 12:16:09.656541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.189 [2024-12-10 12:16:09.656541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.447 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.447 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:48.447 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:48.447 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:48.447 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:48.447 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.447 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:48.447 [2024-12-10 12:16:10.576659] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.447 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:48.706 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:48.706 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:48.964 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:48.964 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:49.222 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:49.479 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=771e2d22-b7d4-4920-b32f-a3abef149b59 00:07:49.479 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u 771e2d22-b7d4-4920-b32f-a3abef149b59 lvol 20 00:07:49.737 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=29c30403-e48e-4189-b913-b7fd243e98f5 00:07:49.737 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:49.737 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 29c30403-e48e-4189-b913-b7fd243e98f5 00:07:49.995 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:50.252 [2024-12-10 12:16:12.249271] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.252 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:50.510 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1477058 00:07:50.510 12:16:12 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:50.510 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:51.443 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_snapshot 29c30403-e48e-4189-b913-b7fd243e98f5 MY_SNAPSHOT 00:07:51.702 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7805133d-e3b9-4ece-84cd-b0c2c28a7e33 00:07:51.702 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_resize 29c30403-e48e-4189-b913-b7fd243e98f5 30 00:07:51.959 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_clone 7805133d-e3b9-4ece-84cd-b0c2c28a7e33 MY_CLONE 00:07:52.218 12:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0449243e-ee29-44b8-a8b3-fea99e78a0c6 00:07:52.218 12:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_inflate 0449243e-ee29-44b8-a8b3-fea99e78a0c6 00:07:52.785 12:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1477058 00:08:00.969 Initializing NVMe Controllers 00:08:00.969 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:00.969 Controller IO queue size 128, less than required. 00:08:00.969 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:00.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:00.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:00.969 Initialization complete. Launching workers. 00:08:00.969 ======================================================== 00:08:00.969 Latency(us) 00:08:00.969 Device Information : IOPS MiB/s Average min max 00:08:00.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12121.60 47.35 10560.99 1672.38 55112.25 00:08:00.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12246.20 47.84 10452.13 1262.10 58364.54 00:08:00.969 ======================================================== 00:08:00.969 Total : 24367.80 95.19 10506.28 1262.10 58364.54 00:08:00.969 00:08:00.969 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:01.228 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete 29c30403-e48e-4189-b913-b7fd243e98f5 00:08:01.228 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 771e2d22-b7d4-4920-b32f-a3abef149b59 00:08:01.487 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:01.487 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:01.487 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:01.487 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:01.487 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:01.487 12:16:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:01.487 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:01.487 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:01.487 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:01.487 rmmod nvme_tcp 00:08:01.487 rmmod nvme_fabrics 00:08:01.487 rmmod nvme_keyring 00:08:01.487 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:01.487 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:01.487 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:01.487 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1476555 ']' 00:08:01.487 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1476555 00:08:01.487 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1476555 ']' 00:08:01.487 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1476555 00:08:01.487 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:01.487 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.487 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1476555 00:08:01.746 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:01.746 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:01.746 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1476555' 00:08:01.746 killing process with pid 1476555 00:08:01.746 
12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1476555 00:08:01.746 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1476555 00:08:01.746 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:01.746 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:01.746 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:01.746 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:01.746 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:01.746 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:01.746 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:01.746 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:01.746 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:01.746 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.746 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.746 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.283 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:04.283 00:08:04.283 real 0m22.689s 00:08:04.283 user 1m5.656s 00:08:04.283 sys 0m7.548s 00:08:04.283 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.283 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:04.283 ************************************ 00:08:04.283 
END TEST nvmf_lvol 00:08:04.283 ************************************ 00:08:04.283 12:16:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:04.283 12:16:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:04.283 12:16:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.283 12:16:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:04.283 ************************************ 00:08:04.283 START TEST nvmf_lvs_grow 00:08:04.283 ************************************ 00:08:04.283 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:04.283 * Looking for test storage... 00:08:04.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:08:04.283 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:04.283 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:04.283 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:04.283 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:04.283 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.283 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.283 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.283 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.283 12:16:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.283 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.283 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.283 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.283 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.283 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.283 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:04.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.284 --rc genhtml_branch_coverage=1 00:08:04.284 --rc genhtml_function_coverage=1 00:08:04.284 --rc genhtml_legend=1 00:08:04.284 --rc geninfo_all_blocks=1 00:08:04.284 --rc geninfo_unexecuted_blocks=1 00:08:04.284 00:08:04.284 ' 
00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:04.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.284 --rc genhtml_branch_coverage=1 00:08:04.284 --rc genhtml_function_coverage=1 00:08:04.284 --rc genhtml_legend=1 00:08:04.284 --rc geninfo_all_blocks=1 00:08:04.284 --rc geninfo_unexecuted_blocks=1 00:08:04.284 00:08:04.284 ' 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:04.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.284 --rc genhtml_branch_coverage=1 00:08:04.284 --rc genhtml_function_coverage=1 00:08:04.284 --rc genhtml_legend=1 00:08:04.284 --rc geninfo_all_blocks=1 00:08:04.284 --rc geninfo_unexecuted_blocks=1 00:08:04.284 00:08:04.284 ' 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:04.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.284 --rc genhtml_branch_coverage=1 00:08:04.284 --rc genhtml_function_coverage=1 00:08:04.284 --rc genhtml_legend=1 00:08:04.284 --rc geninfo_all_blocks=1 00:08:04.284 --rc geninfo_unexecuted_blocks=1 00:08:04.284 00:08:04.284 ' 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:04.284 12:16:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:04.284 12:16:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:04.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.284 
12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:04.284 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:10.858 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:10.858 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:10.858 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:10.858 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:10.858 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:10.858 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:10.858 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:10.858 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:10.858 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:10.858 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:10.858 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:10.858 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:10.858 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:10.858 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:10.858 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:10.858 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:10.858 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:10.858 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:10.858 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:10.858 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:10.858 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:10.858 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:10.859 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:10.859 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:10.859 
12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:10.859 Found net devices under 0000:86:00.0: cvl_0_0 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:10.859 Found net devices under 0000:86:00.1: cvl_0_1 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:10.859 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:10.859 12:16:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:10.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:08:10.859 00:08:10.859 --- 10.0.0.2 ping statistics --- 00:08:10.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.859 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:10.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:10.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:08:10.859 00:08:10.859 --- 10.0.0.1 ping statistics --- 00:08:10.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.859 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1482618 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1482618 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1482618 ']' 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.859 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:10.859 [2024-12-10 12:16:32.364771] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:08:10.859 [2024-12-10 12:16:32.364822] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.860 [2024-12-10 12:16:32.447134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.860 [2024-12-10 12:16:32.486438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.860 [2024-12-10 12:16:32.486476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.860 [2024-12-10 12:16:32.486483] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.860 [2024-12-10 12:16:32.486489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.860 [2024-12-10 12:16:32.486494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:10.860 [2024-12-10 12:16:32.487039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.860 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.860 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:10.860 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:10.860 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:10.860 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:10.860 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.860 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:10.860 [2024-12-10 12:16:32.790548] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.860 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:10.860 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.860 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.860 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:10.860 ************************************ 00:08:10.860 START TEST lvs_grow_clean 00:08:10.860 ************************************ 00:08:10.860 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:10.860 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:10.860 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:10.860 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:10.860 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:10.860 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:10.860 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:10.860 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:08:10.860 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:08:10.860 12:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:11.118 12:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:11.118 12:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:11.118 12:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=37e34b12-313c-434f-8536-0168c0673636 00:08:11.118 12:16:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37e34b12-313c-434f-8536-0168c0673636 00:08:11.118 12:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:11.376 12:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:11.376 12:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:11.376 12:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u 37e34b12-313c-434f-8536-0168c0673636 lvol 150 00:08:11.634 12:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ac059ae2-eec1-443d-ba46-b61f583d5e49 00:08:11.634 12:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:08:11.634 12:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:11.892 [2024-12-10 12:16:33.843161] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:11.892 [2024-12-10 12:16:33.843216] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:11.892 true 00:08:11.892 12:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37e34b12-313c-434f-8536-0168c0673636 00:08:11.892 12:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:11.892 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:11.892 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:12.150 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ac059ae2-eec1-443d-ba46-b61f583d5e49 00:08:12.409 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:12.667 [2024-12-10 12:16:34.601478] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.667 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:12.667 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:12.668 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1482956 00:08:12.668 12:16:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:12.668 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1482956 /var/tmp/bdevperf.sock 00:08:12.668 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1482956 ']' 00:08:12.668 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:12.668 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.668 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:12.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:12.668 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.668 12:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:12.925 [2024-12-10 12:16:34.846770] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:08:12.925 [2024-12-10 12:16:34.846816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1482956 ] 00:08:12.925 [2024-12-10 12:16:34.924477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.925 [2024-12-10 12:16:34.965615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.925 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.925 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:12.925 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:13.491 Nvme0n1 00:08:13.491 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:13.491 [ 00:08:13.491 { 00:08:13.491 "name": "Nvme0n1", 00:08:13.491 "aliases": [ 00:08:13.491 "ac059ae2-eec1-443d-ba46-b61f583d5e49" 00:08:13.491 ], 00:08:13.491 "product_name": "NVMe disk", 00:08:13.491 "block_size": 4096, 00:08:13.491 "num_blocks": 38912, 00:08:13.491 "uuid": "ac059ae2-eec1-443d-ba46-b61f583d5e49", 00:08:13.491 "numa_id": 1, 00:08:13.491 "assigned_rate_limits": { 00:08:13.491 "rw_ios_per_sec": 0, 00:08:13.491 "rw_mbytes_per_sec": 0, 00:08:13.491 "r_mbytes_per_sec": 0, 00:08:13.491 "w_mbytes_per_sec": 0 00:08:13.491 }, 00:08:13.491 "claimed": false, 00:08:13.491 "zoned": false, 00:08:13.491 "supported_io_types": { 00:08:13.491 "read": 
true, 00:08:13.491 "write": true, 00:08:13.491 "unmap": true, 00:08:13.491 "flush": true, 00:08:13.491 "reset": true, 00:08:13.491 "nvme_admin": true, 00:08:13.491 "nvme_io": true, 00:08:13.491 "nvme_io_md": false, 00:08:13.491 "write_zeroes": true, 00:08:13.491 "zcopy": false, 00:08:13.491 "get_zone_info": false, 00:08:13.491 "zone_management": false, 00:08:13.491 "zone_append": false, 00:08:13.491 "compare": true, 00:08:13.491 "compare_and_write": true, 00:08:13.491 "abort": true, 00:08:13.491 "seek_hole": false, 00:08:13.491 "seek_data": false, 00:08:13.491 "copy": true, 00:08:13.491 "nvme_iov_md": false 00:08:13.491 }, 00:08:13.491 "memory_domains": [ 00:08:13.491 { 00:08:13.491 "dma_device_id": "system", 00:08:13.491 "dma_device_type": 1 00:08:13.491 } 00:08:13.491 ], 00:08:13.491 "driver_specific": { 00:08:13.491 "nvme": [ 00:08:13.491 { 00:08:13.491 "trid": { 00:08:13.491 "trtype": "TCP", 00:08:13.491 "adrfam": "IPv4", 00:08:13.491 "traddr": "10.0.0.2", 00:08:13.491 "trsvcid": "4420", 00:08:13.491 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:13.491 }, 00:08:13.491 "ctrlr_data": { 00:08:13.491 "cntlid": 1, 00:08:13.491 "vendor_id": "0x8086", 00:08:13.491 "model_number": "SPDK bdev Controller", 00:08:13.491 "serial_number": "SPDK0", 00:08:13.491 "firmware_revision": "25.01", 00:08:13.491 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:13.491 "oacs": { 00:08:13.491 "security": 0, 00:08:13.491 "format": 0, 00:08:13.491 "firmware": 0, 00:08:13.491 "ns_manage": 0 00:08:13.491 }, 00:08:13.491 "multi_ctrlr": true, 00:08:13.491 "ana_reporting": false 00:08:13.491 }, 00:08:13.491 "vs": { 00:08:13.491 "nvme_version": "1.3" 00:08:13.491 }, 00:08:13.491 "ns_data": { 00:08:13.491 "id": 1, 00:08:13.491 "can_share": true 00:08:13.491 } 00:08:13.491 } 00:08:13.491 ], 00:08:13.491 "mp_policy": "active_passive" 00:08:13.491 } 00:08:13.491 } 00:08:13.491 ] 00:08:13.750 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1483173 00:08:13.750 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:13.750 12:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:13.750 Running I/O for 10 seconds... 00:08:14.683 Latency(us) 00:08:14.683 [2024-12-10T11:16:36.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.683 Nvme0n1 : 1.00 22866.00 89.32 0.00 0.00 0.00 0.00 0.00 00:08:14.683 [2024-12-10T11:16:36.851Z] =================================================================================================================== 00:08:14.683 [2024-12-10T11:16:36.851Z] Total : 22866.00 89.32 0.00 0.00 0.00 0.00 0.00 00:08:14.683 00:08:15.618 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 37e34b12-313c-434f-8536-0168c0673636 00:08:15.618 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.618 Nvme0n1 : 2.00 22948.50 89.64 0.00 0.00 0.00 0.00 0.00 00:08:15.618 [2024-12-10T11:16:37.786Z] =================================================================================================================== 00:08:15.618 [2024-12-10T11:16:37.786Z] Total : 22948.50 89.64 0.00 0.00 0.00 0.00 0.00 00:08:15.618 00:08:15.876 true 00:08:15.876 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37e34b12-313c-434f-8536-0168c0673636 00:08:15.876 12:16:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 
-- # jq -r '.[0].total_data_clusters' 00:08:16.134 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:16.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:16.135 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1483173 00:08:16.701 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.701 Nvme0n1 : 3.00 23004.00 89.86 0.00 0.00 0.00 0.00 0.00 00:08:16.701 [2024-12-10T11:16:38.869Z] =================================================================================================================== 00:08:16.701 [2024-12-10T11:16:38.869Z] Total : 23004.00 89.86 0.00 0.00 0.00 0.00 0.00 00:08:16.701 00:08:17.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.636 Nvme0n1 : 4.00 23085.50 90.18 0.00 0.00 0.00 0.00 0.00 00:08:17.636 [2024-12-10T11:16:39.804Z] =================================================================================================================== 00:08:17.636 [2024-12-10T11:16:39.804Z] Total : 23085.50 90.18 0.00 0.00 0.00 0.00 0.00 00:08:17.636 00:08:19.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.010 Nvme0n1 : 5.00 23044.40 90.02 0.00 0.00 0.00 0.00 0.00 00:08:19.010 [2024-12-10T11:16:41.178Z] =================================================================================================================== 00:08:19.010 [2024-12-10T11:16:41.178Z] Total : 23044.40 90.02 0.00 0.00 0.00 0.00 0.00 00:08:19.010 00:08:19.946 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.946 Nvme0n1 : 6.00 23070.67 90.12 0.00 0.00 0.00 0.00 0.00 00:08:19.946 [2024-12-10T11:16:42.114Z] =================================================================================================================== 
00:08:19.946 [2024-12-10T11:16:42.114Z] Total : 23070.67 90.12 0.00 0.00 0.00 0.00 0.00 00:08:19.946 00:08:20.881 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.881 Nvme0n1 : 7.00 23113.71 90.29 0.00 0.00 0.00 0.00 0.00 00:08:20.881 [2024-12-10T11:16:43.049Z] =================================================================================================================== 00:08:20.881 [2024-12-10T11:16:43.049Z] Total : 23113.71 90.29 0.00 0.00 0.00 0.00 0.00 00:08:20.881 00:08:21.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.816 Nvme0n1 : 8.00 23149.50 90.43 0.00 0.00 0.00 0.00 0.00 00:08:21.816 [2024-12-10T11:16:43.984Z] =================================================================================================================== 00:08:21.816 [2024-12-10T11:16:43.984Z] Total : 23149.50 90.43 0.00 0.00 0.00 0.00 0.00 00:08:21.816 00:08:22.750 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.750 Nvme0n1 : 9.00 23176.44 90.53 0.00 0.00 0.00 0.00 0.00 00:08:22.750 [2024-12-10T11:16:44.918Z] =================================================================================================================== 00:08:22.750 [2024-12-10T11:16:44.918Z] Total : 23176.44 90.53 0.00 0.00 0.00 0.00 0.00 00:08:22.750 00:08:23.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.685 Nvme0n1 : 10.00 23192.30 90.59 0.00 0.00 0.00 0.00 0.00 00:08:23.685 [2024-12-10T11:16:45.853Z] =================================================================================================================== 00:08:23.686 [2024-12-10T11:16:45.854Z] Total : 23192.30 90.59 0.00 0.00 0.00 0.00 0.00 00:08:23.686 00:08:23.686 00:08:23.686 Latency(us) 00:08:23.686 [2024-12-10T11:16:45.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.686 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:08:23.686 Nvme0n1 : 10.00 23194.66 90.60 0.00 0.00 5515.43 3248.31 11967.44 00:08:23.686 [2024-12-10T11:16:45.854Z] =================================================================================================================== 00:08:23.686 [2024-12-10T11:16:45.854Z] Total : 23194.66 90.60 0.00 0.00 5515.43 3248.31 11967.44 00:08:23.686 { 00:08:23.686 "results": [ 00:08:23.686 { 00:08:23.686 "job": "Nvme0n1", 00:08:23.686 "core_mask": "0x2", 00:08:23.686 "workload": "randwrite", 00:08:23.686 "status": "finished", 00:08:23.686 "queue_depth": 128, 00:08:23.686 "io_size": 4096, 00:08:23.686 "runtime": 10.004499, 00:08:23.686 "iops": 23194.664720342316, 00:08:23.686 "mibps": 90.60415906383717, 00:08:23.686 "io_failed": 0, 00:08:23.686 "io_timeout": 0, 00:08:23.686 "avg_latency_us": 5515.434400586228, 00:08:23.686 "min_latency_us": 3248.3060869565215, 00:08:23.686 "max_latency_us": 11967.44347826087 00:08:23.686 } 00:08:23.686 ], 00:08:23.686 "core_count": 1 00:08:23.686 } 00:08:23.686 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1482956 00:08:23.686 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1482956 ']' 00:08:23.686 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1482956 00:08:23.686 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:23.686 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.686 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1482956 00:08:23.944 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:23.944 12:16:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:23.944 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1482956' 00:08:23.944 killing process with pid 1482956 00:08:23.944 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1482956 00:08:23.944 Received shutdown signal, test time was about 10.000000 seconds 00:08:23.944 00:08:23.944 Latency(us) 00:08:23.944 [2024-12-10T11:16:46.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.944 [2024-12-10T11:16:46.112Z] =================================================================================================================== 00:08:23.944 [2024-12-10T11:16:46.112Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:23.944 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1482956 00:08:23.944 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:24.202 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:24.461 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37e34b12-313c-434f-8536-0168c0673636 00:08:24.461 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:24.719 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:24.719 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:24.719 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:24.719 [2024-12-10 12:16:46.810860] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:24.719 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37e34b12-313c-434f-8536-0168c0673636 00:08:24.719 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:24.719 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37e34b12-313c-434f-8536-0168c0673636 00:08:24.719 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:08:24.719 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.719 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:08:24.719 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.719 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:08:24.719 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.719 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:08:24.719 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:08:24.719 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37e34b12-313c-434f-8536-0168c0673636 00:08:24.978 request: 00:08:24.978 { 00:08:24.978 "uuid": "37e34b12-313c-434f-8536-0168c0673636", 00:08:24.978 "method": "bdev_lvol_get_lvstores", 00:08:24.978 "req_id": 1 00:08:24.978 } 00:08:24.978 Got JSON-RPC error response 00:08:24.978 response: 00:08:24.978 { 00:08:24.978 "code": -19, 00:08:24.978 "message": "No such device" 00:08:24.978 } 00:08:24.978 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:24.978 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:24.978 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:24.978 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:24.978 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:25.237 aio_bdev 
00:08:25.237 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ac059ae2-eec1-443d-ba46-b61f583d5e49 00:08:25.237 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=ac059ae2-eec1-443d-ba46-b61f583d5e49 00:08:25.237 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:25.237 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:25.237 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:25.237 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:25.237 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:25.237 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b ac059ae2-eec1-443d-ba46-b61f583d5e49 -t 2000 00:08:25.496 [ 00:08:25.496 { 00:08:25.496 "name": "ac059ae2-eec1-443d-ba46-b61f583d5e49", 00:08:25.496 "aliases": [ 00:08:25.496 "lvs/lvol" 00:08:25.496 ], 00:08:25.496 "product_name": "Logical Volume", 00:08:25.496 "block_size": 4096, 00:08:25.496 "num_blocks": 38912, 00:08:25.496 "uuid": "ac059ae2-eec1-443d-ba46-b61f583d5e49", 00:08:25.496 "assigned_rate_limits": { 00:08:25.496 "rw_ios_per_sec": 0, 00:08:25.496 "rw_mbytes_per_sec": 0, 00:08:25.496 "r_mbytes_per_sec": 0, 00:08:25.496 "w_mbytes_per_sec": 0 00:08:25.496 }, 00:08:25.496 "claimed": false, 00:08:25.496 "zoned": false, 00:08:25.496 "supported_io_types": { 00:08:25.496 "read": true, 00:08:25.496 "write": true, 00:08:25.496 "unmap": 
true, 00:08:25.496 "flush": false, 00:08:25.496 "reset": true, 00:08:25.496 "nvme_admin": false, 00:08:25.496 "nvme_io": false, 00:08:25.496 "nvme_io_md": false, 00:08:25.496 "write_zeroes": true, 00:08:25.496 "zcopy": false, 00:08:25.496 "get_zone_info": false, 00:08:25.496 "zone_management": false, 00:08:25.496 "zone_append": false, 00:08:25.496 "compare": false, 00:08:25.496 "compare_and_write": false, 00:08:25.496 "abort": false, 00:08:25.496 "seek_hole": true, 00:08:25.496 "seek_data": true, 00:08:25.496 "copy": false, 00:08:25.496 "nvme_iov_md": false 00:08:25.496 }, 00:08:25.496 "driver_specific": { 00:08:25.496 "lvol": { 00:08:25.496 "lvol_store_uuid": "37e34b12-313c-434f-8536-0168c0673636", 00:08:25.496 "base_bdev": "aio_bdev", 00:08:25.496 "thin_provision": false, 00:08:25.496 "num_allocated_clusters": 38, 00:08:25.496 "snapshot": false, 00:08:25.496 "clone": false, 00:08:25.496 "esnap_clone": false 00:08:25.496 } 00:08:25.496 } 00:08:25.496 } 00:08:25.496 ] 00:08:25.496 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:25.496 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37e34b12-313c-434f-8536-0168c0673636 00:08:25.496 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:25.755 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:25.755 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37e34b12-313c-434f-8536-0168c0673636 00:08:25.755 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:08:26.013 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:26.013 12:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete ac059ae2-eec1-443d-ba46-b61f583d5e49 00:08:26.013 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 37e34b12-313c-434f-8536-0168c0673636 00:08:26.272 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:26.530 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:08:26.530 00:08:26.530 real 0m15.705s 00:08:26.530 user 0m15.280s 00:08:26.530 sys 0m1.463s 00:08:26.530 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.530 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:26.530 ************************************ 00:08:26.530 END TEST lvs_grow_clean 00:08:26.530 ************************************ 00:08:26.530 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:26.530 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:26.530 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.530 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set 
+x 00:08:26.530 ************************************ 00:08:26.530 START TEST lvs_grow_dirty 00:08:26.530 ************************************ 00:08:26.530 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:26.530 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:26.530 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:26.530 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:26.530 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:26.530 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:26.530 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:26.530 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:08:26.530 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:08:26.530 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:26.788 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:26.788 12:16:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:27.046 12:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6dc15ad8-4591-4342-8131-8bba671bc1b0 00:08:27.047 12:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dc15ad8-4591-4342-8131-8bba671bc1b0 00:08:27.047 12:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:27.304 12:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:27.304 12:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:27.304 12:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u 6dc15ad8-4591-4342-8131-8bba671bc1b0 lvol 150 00:08:27.304 12:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=9bffd599-9604-4489-af2b-83c7a43c6f9b 00:08:27.304 12:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:08:27.304 12:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:27.562 [2024-12-10 12:16:49.625082] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:27.562 [2024-12-10 12:16:49.625135] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:27.562 true 00:08:27.562 12:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dc15ad8-4591-4342-8131-8bba671bc1b0 00:08:27.563 12:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:27.821 12:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:27.821 12:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:28.079 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9bffd599-9604-4489-af2b-83c7a43c6f9b 00:08:28.079 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:28.337 [2024-12-10 12:16:50.395421] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.337 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:28.595 12:16:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1485768 00:08:28.595 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:28.595 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:28.596 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1485768 /var/tmp/bdevperf.sock 00:08:28.596 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1485768 ']' 00:08:28.596 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:28.596 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.596 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:28.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:28.596 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.596 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:28.596 [2024-12-10 12:16:50.646721] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:08:28.596 [2024-12-10 12:16:50.646768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1485768 ] 00:08:28.596 [2024-12-10 12:16:50.720820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.596 [2024-12-10 12:16:50.760406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.854 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.854 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:28.854 12:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:29.112 Nvme0n1 00:08:29.370 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:29.370 [ 00:08:29.370 { 00:08:29.370 "name": "Nvme0n1", 00:08:29.370 "aliases": [ 00:08:29.370 "9bffd599-9604-4489-af2b-83c7a43c6f9b" 00:08:29.370 ], 00:08:29.370 "product_name": "NVMe disk", 00:08:29.370 "block_size": 4096, 00:08:29.370 "num_blocks": 38912, 00:08:29.370 "uuid": "9bffd599-9604-4489-af2b-83c7a43c6f9b", 00:08:29.370 "numa_id": 1, 00:08:29.370 "assigned_rate_limits": { 00:08:29.370 "rw_ios_per_sec": 0, 00:08:29.370 "rw_mbytes_per_sec": 0, 00:08:29.370 "r_mbytes_per_sec": 0, 00:08:29.370 "w_mbytes_per_sec": 0 00:08:29.370 }, 00:08:29.370 "claimed": false, 00:08:29.370 "zoned": false, 00:08:29.370 "supported_io_types": { 00:08:29.370 "read": 
true, 00:08:29.370 "write": true, 00:08:29.370 "unmap": true, 00:08:29.370 "flush": true, 00:08:29.370 "reset": true, 00:08:29.370 "nvme_admin": true, 00:08:29.370 "nvme_io": true, 00:08:29.370 "nvme_io_md": false, 00:08:29.370 "write_zeroes": true, 00:08:29.370 "zcopy": false, 00:08:29.370 "get_zone_info": false, 00:08:29.370 "zone_management": false, 00:08:29.370 "zone_append": false, 00:08:29.370 "compare": true, 00:08:29.370 "compare_and_write": true, 00:08:29.370 "abort": true, 00:08:29.370 "seek_hole": false, 00:08:29.370 "seek_data": false, 00:08:29.370 "copy": true, 00:08:29.370 "nvme_iov_md": false 00:08:29.370 }, 00:08:29.370 "memory_domains": [ 00:08:29.370 { 00:08:29.370 "dma_device_id": "system", 00:08:29.370 "dma_device_type": 1 00:08:29.370 } 00:08:29.370 ], 00:08:29.370 "driver_specific": { 00:08:29.370 "nvme": [ 00:08:29.370 { 00:08:29.370 "trid": { 00:08:29.370 "trtype": "TCP", 00:08:29.370 "adrfam": "IPv4", 00:08:29.370 "traddr": "10.0.0.2", 00:08:29.370 "trsvcid": "4420", 00:08:29.370 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:29.370 }, 00:08:29.370 "ctrlr_data": { 00:08:29.370 "cntlid": 1, 00:08:29.370 "vendor_id": "0x8086", 00:08:29.370 "model_number": "SPDK bdev Controller", 00:08:29.370 "serial_number": "SPDK0", 00:08:29.370 "firmware_revision": "25.01", 00:08:29.370 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:29.370 "oacs": { 00:08:29.370 "security": 0, 00:08:29.370 "format": 0, 00:08:29.370 "firmware": 0, 00:08:29.370 "ns_manage": 0 00:08:29.370 }, 00:08:29.370 "multi_ctrlr": true, 00:08:29.370 "ana_reporting": false 00:08:29.370 }, 00:08:29.370 "vs": { 00:08:29.370 "nvme_version": "1.3" 00:08:29.370 }, 00:08:29.370 "ns_data": { 00:08:29.370 "id": 1, 00:08:29.370 "can_share": true 00:08:29.370 } 00:08:29.370 } 00:08:29.370 ], 00:08:29.370 "mp_policy": "active_passive" 00:08:29.370 } 00:08:29.370 } 00:08:29.370 ] 00:08:29.370 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1485827 00:08:29.370 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:29.370 12:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:29.628 Running I/O for 10 seconds... 00:08:30.563 Latency(us) 00:08:30.563 [2024-12-10T11:16:52.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.563 Nvme0n1 : 1.00 22864.00 89.31 0.00 0.00 0.00 0.00 0.00 00:08:30.563 [2024-12-10T11:16:52.731Z] =================================================================================================================== 00:08:30.563 [2024-12-10T11:16:52.731Z] Total : 22864.00 89.31 0.00 0.00 0.00 0.00 0.00 00:08:30.563 00:08:31.497 12:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6dc15ad8-4591-4342-8131-8bba671bc1b0 00:08:31.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.497 Nvme0n1 : 2.00 23060.50 90.08 0.00 0.00 0.00 0.00 0.00 00:08:31.497 [2024-12-10T11:16:53.665Z] =================================================================================================================== 00:08:31.497 [2024-12-10T11:16:53.665Z] Total : 23060.50 90.08 0.00 0.00 0.00 0.00 0.00 00:08:31.497 00:08:31.756 true 00:08:31.756 12:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dc15ad8-4591-4342-8131-8bba671bc1b0 00:08:31.756 12:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 
-- # jq -r '.[0].total_data_clusters' 00:08:32.013 12:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:32.014 12:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:32.014 12:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1485827 00:08:32.579 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.579 Nvme0n1 : 3.00 23102.33 90.24 0.00 0.00 0.00 0.00 0.00 00:08:32.579 [2024-12-10T11:16:54.747Z] =================================================================================================================== 00:08:32.579 [2024-12-10T11:16:54.747Z] Total : 23102.33 90.24 0.00 0.00 0.00 0.00 0.00 00:08:32.579 00:08:33.510 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.510 Nvme0n1 : 4.00 23160.50 90.47 0.00 0.00 0.00 0.00 0.00 00:08:33.510 [2024-12-10T11:16:55.678Z] =================================================================================================================== 00:08:33.510 [2024-12-10T11:16:55.678Z] Total : 23160.50 90.47 0.00 0.00 0.00 0.00 0.00 00:08:33.510 00:08:34.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.448 Nvme0n1 : 5.00 23192.60 90.60 0.00 0.00 0.00 0.00 0.00 00:08:34.448 [2024-12-10T11:16:56.616Z] =================================================================================================================== 00:08:34.448 [2024-12-10T11:16:56.616Z] Total : 23192.60 90.60 0.00 0.00 0.00 0.00 0.00 00:08:34.448 00:08:35.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.823 Nvme0n1 : 6.00 23216.67 90.69 0.00 0.00 0.00 0.00 0.00 00:08:35.823 [2024-12-10T11:16:57.991Z] =================================================================================================================== 
00:08:35.823 [2024-12-10T11:16:57.991Z] Total : 23216.67 90.69 0.00 0.00 0.00 0.00 0.00 00:08:35.823 00:08:36.758 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.758 Nvme0n1 : 7.00 23234.29 90.76 0.00 0.00 0.00 0.00 0.00 00:08:36.758 [2024-12-10T11:16:58.926Z] =================================================================================================================== 00:08:36.758 [2024-12-10T11:16:58.926Z] Total : 23234.29 90.76 0.00 0.00 0.00 0.00 0.00 00:08:36.758 00:08:37.738 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.738 Nvme0n1 : 8.00 23249.75 90.82 0.00 0.00 0.00 0.00 0.00 00:08:37.738 [2024-12-10T11:16:59.906Z] =================================================================================================================== 00:08:37.738 [2024-12-10T11:16:59.906Z] Total : 23249.75 90.82 0.00 0.00 0.00 0.00 0.00 00:08:37.738 00:08:38.702 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.702 Nvme0n1 : 9.00 23230.33 90.74 0.00 0.00 0.00 0.00 0.00 00:08:38.702 [2024-12-10T11:17:00.870Z] =================================================================================================================== 00:08:38.702 [2024-12-10T11:17:00.870Z] Total : 23230.33 90.74 0.00 0.00 0.00 0.00 0.00 00:08:38.702 00:08:39.638 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.638 Nvme0n1 : 10.00 23241.40 90.79 0.00 0.00 0.00 0.00 0.00 00:08:39.638 [2024-12-10T11:17:01.806Z] =================================================================================================================== 00:08:39.638 [2024-12-10T11:17:01.806Z] Total : 23241.40 90.79 0.00 0.00 0.00 0.00 0.00 00:08:39.638 00:08:39.638 00:08:39.638 Latency(us) 00:08:39.638 [2024-12-10T11:17:01.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.638 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:08:39.638 Nvme0n1 : 10.01 23241.90 90.79 0.00 0.00 5504.25 3234.06 14189.97 00:08:39.638 [2024-12-10T11:17:01.806Z] =================================================================================================================== 00:08:39.638 [2024-12-10T11:17:01.806Z] Total : 23241.90 90.79 0.00 0.00 5504.25 3234.06 14189.97 00:08:39.638 { 00:08:39.638 "results": [ 00:08:39.638 { 00:08:39.638 "job": "Nvme0n1", 00:08:39.638 "core_mask": "0x2", 00:08:39.638 "workload": "randwrite", 00:08:39.638 "status": "finished", 00:08:39.638 "queue_depth": 128, 00:08:39.638 "io_size": 4096, 00:08:39.638 "runtime": 10.005294, 00:08:39.638 "iops": 23241.895740395033, 00:08:39.638 "mibps": 90.7886552359181, 00:08:39.638 "io_failed": 0, 00:08:39.638 "io_timeout": 0, 00:08:39.638 "avg_latency_us": 5504.245748160314, 00:08:39.638 "min_latency_us": 3234.0591304347827, 00:08:39.638 "max_latency_us": 14189.968695652175 00:08:39.638 } 00:08:39.638 ], 00:08:39.638 "core_count": 1 00:08:39.638 } 00:08:39.638 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1485768 00:08:39.638 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1485768 ']' 00:08:39.638 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1485768 00:08:39.638 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:39.638 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.638 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1485768 00:08:39.638 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:39.638 12:17:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:39.638 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1485768' 00:08:39.638 killing process with pid 1485768 00:08:39.638 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1485768 00:08:39.638 Received shutdown signal, test time was about 10.000000 seconds 00:08:39.638 00:08:39.638 Latency(us) 00:08:39.638 [2024-12-10T11:17:01.807Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.639 [2024-12-10T11:17:01.807Z] =================================================================================================================== 00:08:39.639 [2024-12-10T11:17:01.807Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:39.639 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1485768 00:08:39.897 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:39.897 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:40.155 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dc15ad8-4591-4342-8131-8bba671bc1b0 00:08:40.155 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:40.414 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:40.414 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:40.414 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1482618 00:08:40.414 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1482618 00:08:40.414 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1482618 Killed "${NVMF_APP[@]}" "$@" 00:08:40.414 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:40.414 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:40.414 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:40.414 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:40.414 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:40.414 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1487648 00:08:40.414 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1487648 00:08:40.414 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:40.414 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1487648 ']' 00:08:40.414 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.414 12:17:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.414 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.414 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.414 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:40.414 [2024-12-10 12:17:02.559005] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:08:40.414 [2024-12-10 12:17:02.559054] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.673 [2024-12-10 12:17:02.639017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.673 [2024-12-10 12:17:02.678736] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.673 [2024-12-10 12:17:02.678773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.673 [2024-12-10 12:17:02.678780] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.673 [2024-12-10 12:17:02.678787] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.673 [2024-12-10 12:17:02.678792] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:40.673 [2024-12-10 12:17:02.679328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.673 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.673 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:40.673 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:40.673 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:40.673 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:40.673 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.673 12:17:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:40.931 [2024-12-10 12:17:02.990440] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:40.931 [2024-12-10 12:17:02.990540] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:40.931 [2024-12-10 12:17:02.990566] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:40.931 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:40.931 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 9bffd599-9604-4489-af2b-83c7a43c6f9b 00:08:40.931 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=9bffd599-9604-4489-af2b-83c7a43c6f9b 00:08:40.931 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.931 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:40.931 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.931 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.931 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:41.189 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b 9bffd599-9604-4489-af2b-83c7a43c6f9b -t 2000 00:08:41.447 [ 00:08:41.447 { 00:08:41.448 "name": "9bffd599-9604-4489-af2b-83c7a43c6f9b", 00:08:41.448 "aliases": [ 00:08:41.448 "lvs/lvol" 00:08:41.448 ], 00:08:41.448 "product_name": "Logical Volume", 00:08:41.448 "block_size": 4096, 00:08:41.448 "num_blocks": 38912, 00:08:41.448 "uuid": "9bffd599-9604-4489-af2b-83c7a43c6f9b", 00:08:41.448 "assigned_rate_limits": { 00:08:41.448 "rw_ios_per_sec": 0, 00:08:41.448 "rw_mbytes_per_sec": 0, 00:08:41.448 "r_mbytes_per_sec": 0, 00:08:41.448 "w_mbytes_per_sec": 0 00:08:41.448 }, 00:08:41.448 "claimed": false, 00:08:41.448 "zoned": false, 00:08:41.448 "supported_io_types": { 00:08:41.448 "read": true, 00:08:41.448 "write": true, 00:08:41.448 "unmap": true, 00:08:41.448 "flush": false, 00:08:41.448 "reset": true, 00:08:41.448 "nvme_admin": false, 00:08:41.448 "nvme_io": false, 00:08:41.448 "nvme_io_md": false, 00:08:41.448 "write_zeroes": true, 00:08:41.448 "zcopy": false, 00:08:41.448 "get_zone_info": false, 00:08:41.448 
"zone_management": false, 00:08:41.448 "zone_append": false, 00:08:41.448 "compare": false, 00:08:41.448 "compare_and_write": false, 00:08:41.448 "abort": false, 00:08:41.448 "seek_hole": true, 00:08:41.448 "seek_data": true, 00:08:41.448 "copy": false, 00:08:41.448 "nvme_iov_md": false 00:08:41.448 }, 00:08:41.448 "driver_specific": { 00:08:41.448 "lvol": { 00:08:41.448 "lvol_store_uuid": "6dc15ad8-4591-4342-8131-8bba671bc1b0", 00:08:41.448 "base_bdev": "aio_bdev", 00:08:41.448 "thin_provision": false, 00:08:41.448 "num_allocated_clusters": 38, 00:08:41.448 "snapshot": false, 00:08:41.448 "clone": false, 00:08:41.448 "esnap_clone": false 00:08:41.448 } 00:08:41.448 } 00:08:41.448 } 00:08:41.448 ] 00:08:41.448 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:41.448 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dc15ad8-4591-4342-8131-8bba671bc1b0 00:08:41.448 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:41.448 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:41.448 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dc15ad8-4591-4342-8131-8bba671bc1b0 00:08:41.448 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:41.706 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:41.706 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:41.964 [2024-12-10 12:17:03.923494] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:41.964 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dc15ad8-4591-4342-8131-8bba671bc1b0 00:08:41.964 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:41.964 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dc15ad8-4591-4342-8131-8bba671bc1b0 00:08:41.964 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:08:41.964 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.964 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:08:41.964 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.964 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:08:41.964 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.965 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:08:41.965 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:08:41.965 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dc15ad8-4591-4342-8131-8bba671bc1b0 00:08:42.223 request: 00:08:42.223 { 00:08:42.223 "uuid": "6dc15ad8-4591-4342-8131-8bba671bc1b0", 00:08:42.223 "method": "bdev_lvol_get_lvstores", 00:08:42.223 "req_id": 1 00:08:42.223 } 00:08:42.223 Got JSON-RPC error response 00:08:42.223 response: 00:08:42.223 { 00:08:42.223 "code": -19, 00:08:42.223 "message": "No such device" 00:08:42.223 } 00:08:42.223 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:42.223 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:42.223 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:42.223 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:42.223 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:42.223 aio_bdev 00:08:42.223 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9bffd599-9604-4489-af2b-83c7a43c6f9b 00:08:42.223 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=9bffd599-9604-4489-af2b-83c7a43c6f9b 
00:08:42.223 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:42.223 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:42.223 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:42.223 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:42.223 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:42.482 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b 9bffd599-9604-4489-af2b-83c7a43c6f9b -t 2000 00:08:42.741 [ 00:08:42.741 { 00:08:42.741 "name": "9bffd599-9604-4489-af2b-83c7a43c6f9b", 00:08:42.741 "aliases": [ 00:08:42.741 "lvs/lvol" 00:08:42.741 ], 00:08:42.741 "product_name": "Logical Volume", 00:08:42.741 "block_size": 4096, 00:08:42.741 "num_blocks": 38912, 00:08:42.741 "uuid": "9bffd599-9604-4489-af2b-83c7a43c6f9b", 00:08:42.741 "assigned_rate_limits": { 00:08:42.741 "rw_ios_per_sec": 0, 00:08:42.741 "rw_mbytes_per_sec": 0, 00:08:42.741 "r_mbytes_per_sec": 0, 00:08:42.741 "w_mbytes_per_sec": 0 00:08:42.741 }, 00:08:42.741 "claimed": false, 00:08:42.741 "zoned": false, 00:08:42.741 "supported_io_types": { 00:08:42.741 "read": true, 00:08:42.741 "write": true, 00:08:42.741 "unmap": true, 00:08:42.741 "flush": false, 00:08:42.741 "reset": true, 00:08:42.741 "nvme_admin": false, 00:08:42.741 "nvme_io": false, 00:08:42.741 "nvme_io_md": false, 00:08:42.741 "write_zeroes": true, 00:08:42.741 "zcopy": false, 00:08:42.741 "get_zone_info": false, 00:08:42.741 "zone_management": false, 00:08:42.741 "zone_append": 
false, 00:08:42.741 "compare": false, 00:08:42.741 "compare_and_write": false, 00:08:42.741 "abort": false, 00:08:42.741 "seek_hole": true, 00:08:42.741 "seek_data": true, 00:08:42.741 "copy": false, 00:08:42.741 "nvme_iov_md": false 00:08:42.741 }, 00:08:42.741 "driver_specific": { 00:08:42.741 "lvol": { 00:08:42.741 "lvol_store_uuid": "6dc15ad8-4591-4342-8131-8bba671bc1b0", 00:08:42.741 "base_bdev": "aio_bdev", 00:08:42.741 "thin_provision": false, 00:08:42.741 "num_allocated_clusters": 38, 00:08:42.741 "snapshot": false, 00:08:42.741 "clone": false, 00:08:42.741 "esnap_clone": false 00:08:42.741 } 00:08:42.741 } 00:08:42.741 } 00:08:42.741 ] 00:08:42.741 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:42.741 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dc15ad8-4591-4342-8131-8bba671bc1b0 00:08:42.741 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:43.001 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:43.001 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dc15ad8-4591-4342-8131-8bba671bc1b0 00:08:43.001 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:43.001 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:43.001 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
bdev_lvol_delete 9bffd599-9604-4489-af2b-83c7a43c6f9b 00:08:43.266 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6dc15ad8-4591-4342-8131-8bba671bc1b0 00:08:43.525 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:08:43.784 00:08:43.784 real 0m17.092s 00:08:43.784 user 0m44.278s 00:08:43.784 sys 0m3.714s 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:43.784 ************************************ 00:08:43.784 END TEST lvs_grow_dirty 00:08:43.784 ************************************ 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ 
-z nvmf_trace.0 ]] 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:43.784 nvmf_trace.0 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:43.784 rmmod nvme_tcp 00:08:43.784 rmmod nvme_fabrics 00:08:43.784 rmmod nvme_keyring 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1487648 ']' 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1487648 00:08:43.784 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1487648 ']' 00:08:43.784 12:17:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1487648 00:08:43.785 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:43.785 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.785 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1487648 00:08:43.785 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:43.785 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:43.785 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1487648' 00:08:43.785 killing process with pid 1487648 00:08:43.785 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1487648 00:08:43.785 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1487648 00:08:44.044 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:44.044 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:44.044 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:44.044 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:44.044 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:44.044 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:44.044 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:44.044 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:08:44.044 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:44.044 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.044 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.044 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.580 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:46.580 00:08:46.580 real 0m42.130s 00:08:46.580 user 1m5.135s 00:08:46.580 sys 0m10.196s 00:08:46.580 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.580 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:46.580 ************************************ 00:08:46.580 END TEST nvmf_lvs_grow 00:08:46.580 ************************************ 00:08:46.580 12:17:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:46.580 12:17:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:46.580 12:17:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.580 12:17:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:46.580 ************************************ 00:08:46.580 START TEST nvmf_bdev_io_wait 00:08:46.580 ************************************ 00:08:46.580 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:46.580 * Looking for test storage... 
00:08:46.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:08:46.580 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:46.580 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:46.580 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:46.580 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:46.580 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:46.580 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:46.580 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:46.580 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- 
# : 1 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:46.581 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.581 --rc genhtml_branch_coverage=1 00:08:46.581 --rc genhtml_function_coverage=1 00:08:46.581 --rc genhtml_legend=1 00:08:46.581 --rc geninfo_all_blocks=1 00:08:46.581 --rc geninfo_unexecuted_blocks=1 00:08:46.581 00:08:46.581 ' 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:46.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.581 --rc genhtml_branch_coverage=1 00:08:46.581 --rc genhtml_function_coverage=1 00:08:46.581 --rc genhtml_legend=1 00:08:46.581 --rc geninfo_all_blocks=1 00:08:46.581 --rc geninfo_unexecuted_blocks=1 00:08:46.581 00:08:46.581 ' 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:46.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.581 --rc genhtml_branch_coverage=1 00:08:46.581 --rc genhtml_function_coverage=1 00:08:46.581 --rc genhtml_legend=1 00:08:46.581 --rc geninfo_all_blocks=1 00:08:46.581 --rc geninfo_unexecuted_blocks=1 00:08:46.581 00:08:46.581 ' 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:46.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.581 --rc genhtml_branch_coverage=1 00:08:46.581 --rc genhtml_function_coverage=1 00:08:46.581 --rc genhtml_legend=1 00:08:46.581 --rc geninfo_all_blocks=1 00:08:46.581 --rc geninfo_unexecuted_blocks=1 00:08:46.581 00:08:46.581 ' 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.581 12:17:08 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:46.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.581 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:46.582 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:46.582 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:46.582 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:46.582 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:53.151 12:17:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:53.151 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.151 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:53.152 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.152 12:17:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:53.152 Found net devices under 0000:86:00.0: cvl_0_0 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.152 
12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:53.152 Found net devices under 0000:86:00.1: cvl_0_1 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:53.152 12:17:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:53.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:53.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:08:53.152 00:08:53.152 --- 10.0.0.2 ping statistics --- 00:08:53.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.152 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:53.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:53.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:08:53.152 00:08:53.152 --- 10.0.0.1 ping statistics --- 00:08:53.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.152 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1491914 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1491914 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1491914 ']' 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.152 [2024-12-10 12:17:14.472365] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:08:53.152 [2024-12-10 12:17:14.472414] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.152 [2024-12-10 12:17:14.554848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:53.152 [2024-12-10 12:17:14.597732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.152 [2024-12-10 12:17:14.597769] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:53.152 [2024-12-10 12:17:14.597776] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.152 [2024-12-10 12:17:14.597782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.152 [2024-12-10 12:17:14.597787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:53.152 [2024-12-10 12:17:14.599382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.152 [2024-12-10 12:17:14.599501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.152 [2024-12-10 12:17:14.599612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.152 [2024-12-10 12:17:14.599612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.152 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.153 12:17:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.153 [2024-12-10 12:17:14.732064] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.153 Malloc0 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.153 
12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:53.153 [2024-12-10 12:17:14.775459] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1491936 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1491938 
00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:53.153 { 00:08:53.153 "params": { 00:08:53.153 "name": "Nvme$subsystem", 00:08:53.153 "trtype": "$TEST_TRANSPORT", 00:08:53.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.153 "adrfam": "ipv4", 00:08:53.153 "trsvcid": "$NVMF_PORT", 00:08:53.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.153 "hdgst": ${hdgst:-false}, 00:08:53.153 "ddgst": ${ddgst:-false} 00:08:53.153 }, 00:08:53.153 "method": "bdev_nvme_attach_controller" 00:08:53.153 } 00:08:53.153 EOF 00:08:53.153 )") 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1491940 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:53.153 { 00:08:53.153 "params": { 
00:08:53.153 "name": "Nvme$subsystem", 00:08:53.153 "trtype": "$TEST_TRANSPORT", 00:08:53.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.153 "adrfam": "ipv4", 00:08:53.153 "trsvcid": "$NVMF_PORT", 00:08:53.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.153 "hdgst": ${hdgst:-false}, 00:08:53.153 "ddgst": ${ddgst:-false} 00:08:53.153 }, 00:08:53.153 "method": "bdev_nvme_attach_controller" 00:08:53.153 } 00:08:53.153 EOF 00:08:53.153 )") 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1491943 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:08:53.153 { 00:08:53.153 "params": { 00:08:53.153 "name": "Nvme$subsystem", 00:08:53.153 "trtype": "$TEST_TRANSPORT", 00:08:53.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.153 "adrfam": "ipv4", 00:08:53.153 "trsvcid": "$NVMF_PORT", 00:08:53.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.153 "hdgst": ${hdgst:-false}, 00:08:53.153 "ddgst": ${ddgst:-false} 00:08:53.153 }, 00:08:53.153 "method": "bdev_nvme_attach_controller" 00:08:53.153 } 00:08:53.153 EOF 00:08:53.153 )") 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:53.153 { 00:08:53.153 "params": { 00:08:53.153 "name": "Nvme$subsystem", 00:08:53.153 "trtype": "$TEST_TRANSPORT", 00:08:53.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:53.153 "adrfam": "ipv4", 00:08:53.153 "trsvcid": "$NVMF_PORT", 00:08:53.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:53.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:53.153 "hdgst": ${hdgst:-false}, 00:08:53.153 "ddgst": ${ddgst:-false} 00:08:53.153 }, 00:08:53.153 "method": "bdev_nvme_attach_controller" 00:08:53.153 } 00:08:53.153 EOF 00:08:53.153 )") 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1491936 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 
00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:53.153 "params": { 00:08:53.153 "name": "Nvme1", 00:08:53.153 "trtype": "tcp", 00:08:53.153 "traddr": "10.0.0.2", 00:08:53.153 "adrfam": "ipv4", 00:08:53.153 "trsvcid": "4420", 00:08:53.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.153 "hdgst": false, 00:08:53.153 "ddgst": false 00:08:53.153 }, 00:08:53.153 "method": "bdev_nvme_attach_controller" 00:08:53.153 }' 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:53.153 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:53.153 "params": { 00:08:53.153 "name": "Nvme1", 00:08:53.153 "trtype": "tcp", 00:08:53.153 "traddr": "10.0.0.2", 00:08:53.153 "adrfam": "ipv4", 00:08:53.153 "trsvcid": "4420", 00:08:53.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.153 "hdgst": false, 00:08:53.153 "ddgst": false 00:08:53.153 }, 00:08:53.153 "method": "bdev_nvme_attach_controller" 00:08:53.153 }' 00:08:53.154 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:53.154 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:53.154 "params": { 00:08:53.154 "name": "Nvme1", 00:08:53.154 "trtype": "tcp", 00:08:53.154 "traddr": "10.0.0.2", 00:08:53.154 "adrfam": "ipv4", 00:08:53.154 "trsvcid": "4420", 00:08:53.154 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.154 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.154 "hdgst": false, 00:08:53.154 "ddgst": false 00:08:53.154 }, 00:08:53.154 "method": "bdev_nvme_attach_controller" 00:08:53.154 }' 00:08:53.154 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:53.154 12:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:53.154 "params": { 00:08:53.154 "name": "Nvme1", 00:08:53.154 "trtype": "tcp", 00:08:53.154 "traddr": "10.0.0.2", 00:08:53.154 "adrfam": "ipv4", 00:08:53.154 "trsvcid": "4420", 00:08:53.154 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:53.154 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:53.154 "hdgst": false, 00:08:53.154 "ddgst": false 00:08:53.154 }, 00:08:53.154 "method": "bdev_nvme_attach_controller" 00:08:53.154 }' 00:08:53.154 [2024-12-10 12:17:14.828155] Starting SPDK v25.01-pre git sha1 
92d1e663a / DPDK 24.03.0 initialization... 00:08:53.154 [2024-12-10 12:17:14.828208] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:53.154 [2024-12-10 12:17:14.829040] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:08:53.154 [2024-12-10 12:17:14.829084] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:53.154 [2024-12-10 12:17:14.830375] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:08:53.154 [2024-12-10 12:17:14.830382] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:08:53.154 [2024-12-10 12:17:14.830417] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:53.154 [2024-12-10 12:17:14.830418] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:53.154 [2024-12-10 12:17:15.020868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.154 [2024-12-10 12:17:15.062898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:53.154 [2024-12-10 12:17:15.113967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.154 [2024-12-10 12:17:15.154805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:53.154 [2024-12-10
12:17:15.207449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.154 [2024-12-10 12:17:15.249391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:53.154 [2024-12-10 12:17:15.308271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.411 [2024-12-10 12:17:15.359400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:53.411 Running I/O for 1 seconds... 00:08:53.411 Running I/O for 1 seconds... 00:08:53.411 Running I/O for 1 seconds... 00:08:53.668 Running I/O for 1 seconds... 00:08:54.599 7684.00 IOPS, 30.02 MiB/s 00:08:54.599 Latency(us) 00:08:54.599 [2024-12-10T11:17:16.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.599 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:54.599 Nvme1n1 : 1.02 7683.95 30.02 0.00 0.00 16545.32 6525.11 28721.86 00:08:54.599 [2024-12-10T11:17:16.767Z] =================================================================================================================== 00:08:54.599 [2024-12-10T11:17:16.767Z] Total : 7683.95 30.02 0.00 0.00 16545.32 6525.11 28721.86 00:08:54.599 12185.00 IOPS, 47.60 MiB/s 00:08:54.599 Latency(us) 00:08:54.599 [2024-12-10T11:17:16.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.599 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:54.599 Nvme1n1 : 1.01 12241.17 47.82 0.00 0.00 10423.86 4986.43 20743.57 00:08:54.599 [2024-12-10T11:17:16.767Z] =================================================================================================================== 00:08:54.599 [2024-12-10T11:17:16.768Z] Total : 12241.17 47.82 0.00 0.00 10423.86 4986.43 20743.57 00:08:54.600 7431.00 IOPS, 29.03 MiB/s 00:08:54.600 Latency(us) 00:08:54.600 [2024-12-10T11:17:16.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.600 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, 
IO size: 4096) 00:08:54.600 Nvme1n1 : 1.00 7543.01 29.46 0.00 0.00 16933.66 2892.13 39891.48 00:08:54.600 [2024-12-10T11:17:16.768Z] =================================================================================================================== 00:08:54.600 [2024-12-10T11:17:16.768Z] Total : 7543.01 29.46 0.00 0.00 16933.66 2892.13 39891.48 00:08:54.600 234704.00 IOPS, 916.81 MiB/s 00:08:54.600 Latency(us) 00:08:54.600 [2024-12-10T11:17:16.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:54.600 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:54.600 Nvme1n1 : 1.00 234327.70 915.34 0.00 0.00 543.49 231.51 1581.41 00:08:54.600 [2024-12-10T11:17:16.768Z] =================================================================================================================== 00:08:54.600 [2024-12-10T11:17:16.768Z] Total : 234327.70 915.34 0.00 0.00 543.49 231.51 1581.41 00:08:54.600 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1491938 00:08:54.600 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1491940 00:08:54.600 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1491943 00:08:54.600 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:54.600 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.600 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:54.600 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.600 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:54.600 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:54.600 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:54.600 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:54.600 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:54.600 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:54.600 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:54.600 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:54.600 rmmod nvme_tcp 00:08:54.858 rmmod nvme_fabrics 00:08:54.858 rmmod nvme_keyring 00:08:54.858 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:54.858 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:54.858 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:54.858 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1491914 ']' 00:08:54.858 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1491914 00:08:54.858 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1491914 ']' 00:08:54.858 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1491914 00:08:54.858 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:54.858 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.858 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1491914 00:08:54.858 12:17:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.858 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.858 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1491914' 00:08:54.858 killing process with pid 1491914 00:08:54.858 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1491914 00:08:54.858 12:17:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1491914 00:08:54.859 12:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:54.859 12:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:54.859 12:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:54.859 12:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:54.859 12:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:54.859 12:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:54.859 12:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:54.859 12:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:54.859 12:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:54.859 12:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.859 12:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.859 12:17:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:57.397 00:08:57.397 real 0m10.852s 00:08:57.397 user 0m16.662s 00:08:57.397 sys 0m6.174s 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.397 ************************************ 00:08:57.397 END TEST nvmf_bdev_io_wait 00:08:57.397 ************************************ 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:57.397 ************************************ 00:08:57.397 START TEST nvmf_queue_depth 00:08:57.397 ************************************ 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:57.397 * Looking for test storage... 
00:08:57.397 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 
00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:57.397 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:57.397 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:57.397 --rc genhtml_branch_coverage=1 00:08:57.398 --rc genhtml_function_coverage=1 00:08:57.398 --rc genhtml_legend=1 00:08:57.398 --rc geninfo_all_blocks=1 00:08:57.398 --rc geninfo_unexecuted_blocks=1 00:08:57.398 00:08:57.398 ' 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:57.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.398 --rc genhtml_branch_coverage=1 00:08:57.398 --rc genhtml_function_coverage=1 00:08:57.398 --rc genhtml_legend=1 00:08:57.398 --rc geninfo_all_blocks=1 00:08:57.398 --rc geninfo_unexecuted_blocks=1 00:08:57.398 00:08:57.398 ' 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:57.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.398 --rc genhtml_branch_coverage=1 00:08:57.398 --rc genhtml_function_coverage=1 00:08:57.398 --rc genhtml_legend=1 00:08:57.398 --rc geninfo_all_blocks=1 00:08:57.398 --rc geninfo_unexecuted_blocks=1 00:08:57.398 00:08:57.398 ' 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:57.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.398 --rc genhtml_branch_coverage=1 00:08:57.398 --rc genhtml_function_coverage=1 00:08:57.398 --rc genhtml_legend=1 00:08:57.398 --rc geninfo_all_blocks=1 00:08:57.398 --rc geninfo_unexecuted_blocks=1 00:08:57.398 00:08:57.398 ' 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:08:57.398 12:17:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.398 12:17:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:57.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.398 12:17:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:57.398 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:03.971 12:17:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:03.971 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:03.971 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:03.971 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:03.972 Found net devices under 0000:86:00.0: cvl_0_0 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:03.972 Found net devices under 0000:86:00.1: cvl_0_1 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:03.972 
12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:03.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:03.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:09:03.972 00:09:03.972 --- 10.0.0.2 ping statistics --- 00:09:03.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.972 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:03.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:03.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:09:03.972 00:09:03.972 --- 10.0.0.1 ping statistics --- 00:09:03.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.972 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1495954 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
1495954 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1495954 ']' 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.972 [2024-12-10 12:17:25.432814] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:09:03.972 [2024-12-10 12:17:25.432864] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.972 [2024-12-10 12:17:25.497429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.972 [2024-12-10 12:17:25.538146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.972 [2024-12-10 12:17:25.538195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:03.972 [2024-12-10 12:17:25.538203] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.972 [2024-12-10 12:17:25.538209] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.972 [2024-12-10 12:17:25.538214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:03.972 [2024-12-10 12:17:25.538726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.972 [2024-12-10 12:17:25.673655] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.972 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.973 Malloc0 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.973 [2024-12-10 12:17:25.723734] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.973 12:17:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1495978 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1495978 /var/tmp/bdevperf.sock 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1495978 ']' 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:03.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.973 [2024-12-10 12:17:25.773315] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:09:03.973 [2024-12-10 12:17:25.773355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1495978 ] 00:09:03.973 [2024-12-10 12:17:25.847249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.973 [2024-12-10 12:17:25.887369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.973 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.973 NVMe0n1 00:09:03.973 12:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.973 12:17:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:04.231 Running I/O for 10 seconds... 
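The initiator side logged above starts bdevperf in wait-for-RPC mode, attaches the exported controller over the bdevperf RPC socket, and then kicks off the workload. A sketch of that flow, with paths assumed relative to an SPDK build tree:

```shell
# Sketch of the initiator side, as logged above: bdevperf is started with -z
# (wait for RPC) at queue depth 1024, 4 KiB I/O, verify workload, 10 seconds.
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!

# Attach the NVMe-oF controller exported by the target over TCP/IPv4.
./scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1

# Drive the actual test run over the same socket.
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
```

The `-q 1024` queue depth is what produces the `depth: 1024` field in the JSON results that follow.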
00:09:06.099 11604.00 IOPS, 45.33 MiB/s [2024-12-10T11:17:29.640Z] 11847.00 IOPS, 46.28 MiB/s [2024-12-10T11:17:30.574Z] 11989.00 IOPS, 46.83 MiB/s [2024-12-10T11:17:31.509Z] 12051.00 IOPS, 47.07 MiB/s [2024-12-10T11:17:32.442Z] 12088.20 IOPS, 47.22 MiB/s [2024-12-10T11:17:33.375Z] 12153.33 IOPS, 47.47 MiB/s [2024-12-10T11:17:34.308Z] 12188.00 IOPS, 47.61 MiB/s [2024-12-10T11:17:35.247Z] 12177.50 IOPS, 47.57 MiB/s [2024-12-10T11:17:36.624Z] 12201.33 IOPS, 47.66 MiB/s [2024-12-10T11:17:36.624Z] 12229.50 IOPS, 47.77 MiB/s 00:09:14.456 Latency(us) 00:09:14.456 [2024-12-10T11:17:36.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.456 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:14.456 Verification LBA range: start 0x0 length 0x4000 00:09:14.456 NVMe0n1 : 10.06 12244.83 47.83 0.00 0.00 83316.22 17894.18 54936.26 00:09:14.456 [2024-12-10T11:17:36.624Z] =================================================================================================================== 00:09:14.456 [2024-12-10T11:17:36.624Z] Total : 12244.83 47.83 0.00 0.00 83316.22 17894.18 54936.26 00:09:14.456 { 00:09:14.456 "results": [ 00:09:14.456 { 00:09:14.456 "job": "NVMe0n1", 00:09:14.456 "core_mask": "0x1", 00:09:14.456 "workload": "verify", 00:09:14.456 "status": "finished", 00:09:14.456 "verify_range": { 00:09:14.456 "start": 0, 00:09:14.456 "length": 16384 00:09:14.456 }, 00:09:14.456 "queue_depth": 1024, 00:09:14.456 "io_size": 4096, 00:09:14.456 "runtime": 10.061223, 00:09:14.456 "iops": 12244.833456131526, 00:09:14.456 "mibps": 47.83138068801377, 00:09:14.457 "io_failed": 0, 00:09:14.457 "io_timeout": 0, 00:09:14.457 "avg_latency_us": 83316.21700397451, 00:09:14.457 "min_latency_us": 17894.177391304347, 00:09:14.457 "max_latency_us": 54936.26434782609 00:09:14.457 } 00:09:14.457 ], 00:09:14.457 "core_count": 1 00:09:14.457 } 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 1495978 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1495978 ']' 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1495978 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1495978 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1495978' 00:09:14.457 killing process with pid 1495978 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1495978 00:09:14.457 Received shutdown signal, test time was about 10.000000 seconds 00:09:14.457 00:09:14.457 Latency(us) 00:09:14.457 [2024-12-10T11:17:36.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.457 [2024-12-10T11:17:36.625Z] =================================================================================================================== 00:09:14.457 [2024-12-10T11:17:36.625Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1495978 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:14.457 rmmod nvme_tcp 00:09:14.457 rmmod nvme_fabrics 00:09:14.457 rmmod nvme_keyring 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1495954 ']' 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1495954 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1495954 ']' 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1495954 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.457 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1495954 00:09:14.716 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:09:14.716 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:14.716 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1495954' 00:09:14.716 killing process with pid 1495954 00:09:14.716 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1495954 00:09:14.716 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1495954 00:09:14.716 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:14.717 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:14.717 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:14.717 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:14.717 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:14.717 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:14.717 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:14.717 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:14.717 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:14.717 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.717 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.717 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.323 12:17:38 
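The teardown logged above (nvmftestfini / nvmfcleanup) unloads the kernel initiator modules with retries, kills the target, and restores iptables. A sketch of those steps, assuming root and that the test had loaded the nvme-tcp modules (`$nvmf_tgt_pid` is illustrative):

```shell
# Sketch of the nvmftestfini teardown, mirroring the logged cleanup.
sync

# nvme-tcp can be busy briefly after disconnect, so removal is retried.
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
    sleep 1
done
modprobe -v -r nvme-fabrics

# Stop the nvmf_tgt app, then drop the SPDK_NVMF iptables rules.
kill "$nvmf_tgt_pid"
iptables-save | grep -v SPDK_NVMF | iptables-restore
```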
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:17.323 00:09:17.323 real 0m19.730s 00:09:17.323 user 0m23.165s 00:09:17.323 sys 0m5.976s 00:09:17.323 12:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.323 12:17:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.323 ************************************ 00:09:17.323 END TEST nvmf_queue_depth 00:09:17.323 ************************************ 00:09:17.323 12:17:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:17.323 12:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:17.323 12:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.323 12:17:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:17.323 ************************************ 00:09:17.323 START TEST nvmf_target_multipath 00:09:17.323 ************************************ 00:09:17.323 12:17:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:17.323 * Looking for test storage... 
00:09:17.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:17.323 12:17:39 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:17.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.323 --rc genhtml_branch_coverage=1 00:09:17.323 --rc genhtml_function_coverage=1 00:09:17.323 --rc genhtml_legend=1 00:09:17.323 --rc geninfo_all_blocks=1 00:09:17.323 --rc geninfo_unexecuted_blocks=1 00:09:17.323 00:09:17.323 ' 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:17.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.323 --rc genhtml_branch_coverage=1 00:09:17.323 --rc genhtml_function_coverage=1 00:09:17.323 --rc genhtml_legend=1 00:09:17.323 --rc geninfo_all_blocks=1 00:09:17.323 --rc geninfo_unexecuted_blocks=1 00:09:17.323 00:09:17.323 ' 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:17.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.323 --rc genhtml_branch_coverage=1 00:09:17.323 --rc genhtml_function_coverage=1 00:09:17.323 --rc genhtml_legend=1 00:09:17.323 --rc geninfo_all_blocks=1 00:09:17.323 --rc geninfo_unexecuted_blocks=1 00:09:17.323 00:09:17.323 ' 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:17.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:17.323 --rc genhtml_branch_coverage=1 00:09:17.323 --rc genhtml_function_coverage=1 00:09:17.323 --rc genhtml_legend=1 00:09:17.323 --rc geninfo_all_blocks=1 00:09:17.323 --rc geninfo_unexecuted_blocks=1 00:09:17.323 00:09:17.323 ' 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:17.323 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:17.324 12:17:39 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:17.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:17.324 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:23.915 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:23.915 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:23.915 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:23.915 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:23.915 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:23.915 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:23.915 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:23.915 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:23.915 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:23.915 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:23.915 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:23.915 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:23.915 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:23.915 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:23.915 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:23.915 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.915 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.915 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.915 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.915 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.915 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.915 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:23.916 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:23.916 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:23.916 Found net devices under 0000:86:00.0: cvl_0_0 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:23.916 12:17:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:23.916 Found net devices under 0000:86:00.1: cvl_0_1 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.916 12:17:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:23.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:23.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:09:23.916 00:09:23.916 --- 10.0.0.2 ping statistics --- 00:09:23.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.916 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:23.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:23.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:09:23.916 00:09:23.916 --- 10.0.0.1 ping statistics --- 00:09:23.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.916 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:23.916 only one NIC for nvmf test 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:23.916 12:17:45 
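Editor's note: the `nvmf_tcp_init` entries above (`@250`-`@291`) split the two E810 ports into roles and move the target port into a private namespace, so target and initiator stacks on the same host really traverse the link; the two pings then verify reachability in each direction. A dry-run sketch of that sequence (interface names and IPs copied from the log; it only echoes the commands, since applying them needs root and those physical ports):

```shell
# Dry-run sketch of the nvmf_tcp_init sequence traced above. run() merely
# prints each command; swap the echo for "$@" to actually apply them
# (requires root and the two ports named here).
set -euo pipefail

target_if=cvl_0_0 initiator_if=cvl_0_1
target_ip=10.0.0.2 initiator_ip=10.0.0.1
ns=cvl_0_0_ns_spdk

run() { echo "+ $*"; }

run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"                  # isolate target port
run ip addr add "$initiator_ip/24" dev "$initiator_if"
run ip netns exec "$ns" ip addr add "$target_ip/24" dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run ip netns exec "$ns" ip link set lo up
run ping -c 1 "$target_ip"                                # reachability check
```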
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:23.916 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:23.916 rmmod nvme_tcp 00:09:23.916 rmmod nvme_fabrics 00:09:23.916 rmmod nvme_keyring 00:09:23.917 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:23.917 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:23.917 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:23.917 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:23.917 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:23.917 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:23.917 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:23.917 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:23.917 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:23.917 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:23.917 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:23.917 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:23.917 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:23.917 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.917 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.917 12:17:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.297 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.556 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:25.556 00:09:25.556 real 0m8.512s 00:09:25.556 user 0m1.900s 00:09:25.556 sys 0m4.525s 00:09:25.556 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.556 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:25.556 ************************************ 00:09:25.556 END TEST nvmf_target_multipath 00:09:25.556 ************************************ 00:09:25.556 12:17:47 nvmf_tcp.nvmf_target_core 
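Editor's note: the firewall handling in this stretch is a tag-and-sweep pattern. `ipts` (`@287`/`@790` above) appends `-m comment --comment 'SPDK_NVMF:...'` to every rule the test adds, and teardown's `iptr` pipes `iptables-save | grep -v SPDK_NVMF | iptables-restore` (`@791`) to drop them all at once. The filtering half can be shown on plain text (the dump below is sample data, not a live firewall):

```shell
# Sketch of the tag-and-sweep cleanup behind ipts/iptr in nvmf/common.sh:
# rules the suite added carry an SPDK_NVMF comment, so one grep over an
# iptables-save dump removes exactly those. Sample dump, not real state.
set -euo pipefail

dump='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1 ..."
-A INPUT -p icmp -j ACCEPT'

# Teardown keeps only the rules the test suite did not add:
kept=$(grep -v SPDK_NVMF <<<"$dump")
echo "$kept"
```

In the real script the `kept` text would be fed back through `iptables-restore`, which is why tagging at insert time matters: no bookkeeping of rule positions is needed at cleanup.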
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:25.556 12:17:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:25.556 12:17:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:25.557 ************************************ 00:09:25.557 START TEST nvmf_zcopy 00:09:25.557 ************************************ 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:25.557 * Looking for test storage... 00:09:25.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:25.557 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.557 12:17:47 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:25.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.817 --rc genhtml_branch_coverage=1 00:09:25.817 --rc genhtml_function_coverage=1 00:09:25.817 --rc genhtml_legend=1 00:09:25.817 --rc geninfo_all_blocks=1 00:09:25.817 --rc geninfo_unexecuted_blocks=1 00:09:25.817 00:09:25.817 ' 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:25.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.817 --rc genhtml_branch_coverage=1 00:09:25.817 --rc genhtml_function_coverage=1 00:09:25.817 --rc genhtml_legend=1 00:09:25.817 --rc geninfo_all_blocks=1 00:09:25.817 --rc geninfo_unexecuted_blocks=1 00:09:25.817 00:09:25.817 ' 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:25.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.817 --rc genhtml_branch_coverage=1 00:09:25.817 --rc genhtml_function_coverage=1 00:09:25.817 --rc genhtml_legend=1 00:09:25.817 --rc geninfo_all_blocks=1 00:09:25.817 --rc geninfo_unexecuted_blocks=1 00:09:25.817 00:09:25.817 ' 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:25.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.817 --rc genhtml_branch_coverage=1 00:09:25.817 --rc 
genhtml_function_coverage=1 00:09:25.817 --rc genhtml_legend=1 00:09:25.817 --rc geninfo_all_blocks=1 00:09:25.817 --rc geninfo_unexecuted_blocks=1 00:09:25.817 00:09:25.817 ' 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.817 12:17:47 
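Editor's note: the `lt 1.15 2` trace above (scripts/common.sh `@333`-`@368`) is a field-wise version comparison: both versions are split on `IFS=.-:` into arrays and compared component by component, which is why `1.15 < 2` holds even though the string `"1.15"` sorts after `"2"` lexically. A simplified, self-contained sketch of that logic (numeric dotted components only; not the real `cmp_versions` helper):

```shell
# Simplified sketch of the cmp_versions logic traced above: split both
# versions on ./-/: and compare numerically field by field.
set -euo pipefail

version_lt() {            # returns 0 if $1 < $2
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<<"$1"
  IFS=.-: read -ra ver2 <<<"$2"
  local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for ((v = 0; v < n; v++)); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1                # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.0 1.15 || echo "2.0 >= 1.15"
```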
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:25.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:25.817 12:17:47 
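Editor's note: the one real warning in this stretch is nvmf/common.sh line 33 tripping over `'[' '' -eq 1 ']'` ("integer expression expected"): `[`'s `-eq` needs integers on both sides, and an empty variable produces exactly this noise while `[` simply returns nonzero, so the run continues. A sketch of the failure mode and two common guards (`flag` is a stand-in; the log excerpt does not show which variable was empty):

```shell
# Reproduces the "[: : integer expression expected" warning pattern from the
# log and shows the usual guards. "flag" is hypothetical; the real variable
# name at nvmf/common.sh line 33 is not visible in this excerpt.
flag=""

# 1) Failing form: empty string is not an integer for [ ... -eq ... ]
[ "$flag" -eq 1 ] 2>/dev/null && echo "enabled" || echo "treated as false"

# 2) Guard with a default so the comparison always sees a number
[ "${flag:-0}" -eq 1 ] && echo "enabled" || echo "disabled"

# 3) Or compare as strings, which is safe for empty values
[ "$flag" = 1 ] && echo "enabled" || echo "disabled"
```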
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:25.817 12:17:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:32.390 12:17:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:32.390 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:32.390 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:32.390 Found net devices under 0000:86:00.0: cvl_0_0 00:09:32.390 12:17:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:32.390 Found net devices under 0000:86:00.1: cvl_0_1 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:32.390 12:17:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:32.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:32.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.426 ms 00:09:32.390 00:09:32.390 --- 10.0.0.2 ping statistics --- 00:09:32.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.390 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:32.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:32.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:09:32.390 00:09:32.390 --- 10.0.0.1 ping statistics --- 00:09:32.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.390 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1504879 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
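The `nvmf_tcp_init` sequence above (nvmf/common.sh) moves one port of the NIC pair into a network namespace, assigns the 10.0.0.0/24 addresses, opens TCP port 4420, and confirms reachability in both directions with ping before any NVMe/TCP traffic flows. A minimal standalone sketch of the same wiring — interface and namespace names are taken from the log, and root privileges plus two connected ports are assumed:

```shell
#!/usr/bin/env bash
# Sketch of the namespace wiring performed by nvmf_tcp_init in the log above.
# Requires root and a connected interface pair; names/addresses mirror the log.
set -euo pipefail

TARGET_IF=cvl_0_0       # moved into the namespace, serves 10.0.0.2
INITIATOR_IF=cvl_0_1    # stays in the default namespace as 10.0.0.1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic in on port 4420 (the ipts wrapper in the log adds
# an SPDK_NVMF comment so the rule can be cleaned up later).
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Both directions must answer before the test proceeds.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Putting the target port in its own namespace is what lets a single machine act as both initiator and target over real hardware instead of loopback.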
-m 0x2 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1504879 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1504879 ']' 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.390 12:17:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.390 [2024-12-10 12:17:53.806592] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:09:32.390 [2024-12-10 12:17:53.806640] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.390 [2024-12-10 12:17:53.888851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.390 [2024-12-10 12:17:53.929013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.390 [2024-12-10 12:17:53.929049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:32.390 [2024-12-10 12:17:53.929056] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.390 [2024-12-10 12:17:53.929067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.390 [2024-12-10 12:17:53.929072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.390 [2024-12-10 12:17:53.929638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.390 [2024-12-10 12:17:54.065889] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
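The `nvmfappstart -m 0x2` call above boils down to launching `nvmf_tgt` inside the target namespace and then blocking in `waitforlisten` until the app answers on its UNIX-domain RPC socket. A rough unrolled sketch — paths are from the log, and the polling loop is an approximation of what `waitforlisten` does, not its exact code:

```shell
# nvmfappstart -m 0x2, unrolled: start nvmf_tgt in the target namespace and
# wait for its RPC socket. Assumes an SPDK build tree at $SPDK.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
NS_EXEC="ip netns exec cvl_0_0_ns_spdk"

$NS_EXEC "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# waitforlisten: poll until the target responds on /var/tmp/spdk.sock,
# bailing out if the process died during startup.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$nvmfpid"
  sleep 0.5
done
```

`-m 0x2` pins the target to core 1 (matching the "Reactor started on core 1" notice), leaving core 0 free for the bdevperf initiator started later.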
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.390 [2024-12-10 12:17:54.082078] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.390 malloc0 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:32.390 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:32.390 { 00:09:32.390 "params": { 00:09:32.390 "name": "Nvme$subsystem", 00:09:32.390 "trtype": "$TEST_TRANSPORT", 00:09:32.390 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:32.390 "adrfam": "ipv4", 00:09:32.390 "trsvcid": "$NVMF_PORT", 00:09:32.390 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:32.390 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:32.390 "hdgst": ${hdgst:-false}, 00:09:32.390 "ddgst": ${ddgst:-false} 00:09:32.390 }, 00:09:32.390 "method": "bdev_nvme_attach_controller" 00:09:32.390 } 00:09:32.390 EOF 00:09:32.391 )") 00:09:32.391 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:32.391 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:32.391 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:32.391 12:17:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:32.391 "params": { 00:09:32.391 "name": "Nvme1", 00:09:32.391 "trtype": "tcp", 00:09:32.391 "traddr": "10.0.0.2", 00:09:32.391 "adrfam": "ipv4", 00:09:32.391 "trsvcid": "4420", 00:09:32.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:32.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:32.391 "hdgst": false, 00:09:32.391 "ddgst": false 00:09:32.391 }, 00:09:32.391 "method": "bdev_nvme_attach_controller" 00:09:32.391 }' 00:09:32.391 [2024-12-10 12:17:54.160735] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:09:32.391 [2024-12-10 12:17:54.160776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1504908 ] 00:09:32.391 [2024-12-10 12:17:54.234462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.391 [2024-12-10 12:17:54.274610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.649 Running I/O for 10 seconds... 
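`gen_nvmf_target_json` expands one heredoc fragment per subsystem and pipes the result through `jq`, and bdevperf reads the finished config on a file descriptor (`--json /dev/fd/62`), so no temp file is needed. A standalone rendering of the single-subsystem fragment printf'd above — values mirror the log (`$NVMF_FIRST_TARGET_IP`=10.0.0.2, `$NVMF_PORT`=4420), and the real helper embeds this fragment in a fuller config:

```shell
# Render the per-subsystem attach-controller fragment shown in the log.
# hdgst/ddgst default to false, as in the heredoc's ${hdgst:-false}.
gen_nvmf_target_json() {
  local n=${1:-1}
  cat <<EOF
{
  "params": {
    "name": "Nvme$n",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$n",
    "hostnqn": "nqn.2016-06.io.spdk:host$n",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_nvmf_target_json 1
```

Keeping the fragment parameterized on the subsystem index is what lets the same helper serve tests with one or many `cnodeN` subsystems.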
00:09:34.519 8481.00 IOPS, 66.26 MiB/s
[2024-12-10T11:17:57.623Z] 8542.50 IOPS, 66.74 MiB/s
[2024-12-10T11:17:58.998Z] 8567.00 IOPS, 66.93 MiB/s
[2024-12-10T11:17:59.934Z] 8576.25 IOPS, 67.00 MiB/s
[2024-12-10T11:18:00.870Z] 8584.60 IOPS, 67.07 MiB/s
[2024-12-10T11:18:01.807Z] 8570.50 IOPS, 66.96 MiB/s
[2024-12-10T11:18:02.741Z] 8575.29 IOPS, 66.99 MiB/s
[2024-12-10T11:18:03.676Z] 8586.00 IOPS, 67.08 MiB/s
[2024-12-10T11:18:05.053Z] 8590.33 IOPS, 67.11 MiB/s
[2024-12-10T11:18:05.053Z] 8595.80 IOPS, 67.15 MiB/s
00:09:42.885 Latency(us)
00:09:42.885 [2024-12-10T11:18:05.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:42.885 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:42.885 Verification LBA range: start 0x0 length 0x1000
00:09:42.885 Nvme1n1 : 10.01 8601.13 67.20 0.00 0.00 14839.27 277.82 22681.15
00:09:42.885 [2024-12-10T11:18:05.053Z] ===================================================================================================================
00:09:42.885 [2024-12-10T11:18:05.053Z] Total : 8601.13 67.20 0.00 0.00 14839.27 277.82 22681.15
00:09:42.885 12:18:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1506865
00:09:42.885 12:18:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:09:42.885 12:18:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:42.885 12:18:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:09:42.885 12:18:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:09:42.885 12:18:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:09:42.885 12:18:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:09:42.885 12:18:04
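The per-second samples and the summary row are internally consistent: bdevperf runs 8 KiB IOs (`-o 8192`), so MiB/s is just IOPS × 8192 / 2^20. A quick check of the numbers taken from the table above:

```shell
# Verify the IOPS -> MiB/s conversion used in the bdevperf output above.
iops_to_mibps() {
  awk -v iops="$1" -v sz="${2:-8192}" 'BEGIN { printf "%.2f\n", iops * sz / 1048576 }'
}

iops_to_mibps 8601.13   # summary row:       8601.13 IOPS -> 67.20 MiB/s
iops_to_mibps 8481.00   # first 1 s sample:  8481.00 IOPS -> 66.26 MiB/s
```

The small gap between the final sample (8595.80 IOPS) and the summary (8601.13 IOPS) is expected, since the summary averages over the full 10.01 s runtime rather than whole-second windows.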
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:42.885 [2024-12-10 12:18:04.797979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.885 [2024-12-10 12:18:04.798016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.885 12:18:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:42.885 { 00:09:42.885 "params": { 00:09:42.885 "name": "Nvme$subsystem", 00:09:42.885 "trtype": "$TEST_TRANSPORT", 00:09:42.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:42.885 "adrfam": "ipv4", 00:09:42.885 "trsvcid": "$NVMF_PORT", 00:09:42.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:42.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:42.886 "hdgst": ${hdgst:-false}, 00:09:42.886 "ddgst": ${ddgst:-false} 00:09:42.886 }, 00:09:42.886 "method": "bdev_nvme_attach_controller" 00:09:42.886 } 00:09:42.886 EOF 00:09:42.886 )") 00:09:42.886 12:18:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:42.886 12:18:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:42.886 [2024-12-10 12:18:04.805975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.805998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 12:18:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:42.886 12:18:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:42.886 "params": { 00:09:42.886 "name": "Nvme1", 00:09:42.886 "trtype": "tcp", 00:09:42.886 "traddr": "10.0.0.2", 00:09:42.886 "adrfam": "ipv4", 00:09:42.886 "trsvcid": "4420", 00:09:42.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:42.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:42.886 "hdgst": false, 00:09:42.886 "ddgst": false 00:09:42.886 }, 00:09:42.886 "method": "bdev_nvme_attach_controller" 00:09:42.886 }' 00:09:42.886 [2024-12-10 12:18:04.813987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.814006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:04.822006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.822023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:04.830028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.830044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:04.840580] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:09:42.886 [2024-12-10 12:18:04.840622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1506865 ] 00:09:42.886 [2024-12-10 12:18:04.842061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.842079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:04.850080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.850096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:04.858100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.858116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:04.866120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.866136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:04.874141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.874161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:04.882167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.882183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:04.890188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.890204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:09:42.886 [2024-12-10 12:18:04.898208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.898229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:04.906226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.906242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:04.914248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.914265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:04.915727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.886 [2024-12-10 12:18:04.922269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.922285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:04.930291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.930309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:04.938309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.938326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:04.946331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.946347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:04.954354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.954371] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:04.956476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.886 [2024-12-10 12:18:04.962376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.962393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:04.970406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.970425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:04.978425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.978445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:04.986444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.986462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:04.994476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:04.994494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:05.002500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:05.002518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:05.010520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:05.010536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:05.018542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:09:42.886 [2024-12-10 12:18:05.018560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:05.026563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:05.026580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:05.034581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:05.034597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:05.042604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:05.042625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.886 [2024-12-10 12:18:05.050631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.886 [2024-12-10 12:18:05.050651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.058652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.058671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.066672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.066690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.074695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.074713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.082714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 
12:18:05.082732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.090736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.090752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.098754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.098771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.106779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.106796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.114803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.114822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.122825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.122844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.130847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.130867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.138868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.138886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.146908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.146928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.188391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.188412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.195026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.195044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 Running I/O for 5 seconds... 00:09:43.146 [2024-12-10 12:18:05.203044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.203061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.215127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.215149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.223106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.223127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.232244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.232266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.241312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.241333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.249967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.249988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.258544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.258565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.268351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.268373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.277293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.277313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.286021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.286042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.294806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.294838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.146 [2024-12-10 12:18:05.303477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.146 [2024-12-10 12:18:05.303498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.312980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.313001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.322638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.322659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 
12:18:05.329597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.329618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.339960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.339982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.348930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.348950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.357592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.357612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.366516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.366540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.375452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.375475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.385081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.385103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.394614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.394636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.404072] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.404095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.414151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.414180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.423592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.423614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.432964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.432985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.442491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.442516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.451077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.451098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.460381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.460403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.469702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.469724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.479029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.479050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.487849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.487869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.497419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.497440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.506754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.506778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.515427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.515448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.524120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.524141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.532715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.532735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.542179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.542203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.551423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 
[2024-12-10 12:18:05.551444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.560190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.560210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.569662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.569683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.578412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.578434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.587782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.587803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.596890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.596910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.441 [2024-12-10 12:18:05.606197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.441 [2024-12-10 12:18:05.606218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.615545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.615565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.625019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.625040] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.634534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.634555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.643972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.643993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.652889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.652910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.660093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.660115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.670527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.670549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.679648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.679670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.688587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.688609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.697550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.697571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:43.701 [2024-12-10 12:18:05.707115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.707137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.716675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.716696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.725463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.725484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.734138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.734167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.743287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.743312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.752654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.752675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.761591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.761612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.770318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.770340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.779096] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.779116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.788514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.788536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.797858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.797880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.806377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.806398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.815594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.815615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.825052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.825072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.834306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.834329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.841523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.841543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.851793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.851813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.701 [2024-12-10 12:18:05.858853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.701 [2024-12-10 12:18:05.858873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.960 [2024-12-10 12:18:05.869481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.960 [2024-12-10 12:18:05.869503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.960 [2024-12-10 12:18:05.878382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.960 [2024-12-10 12:18:05.878402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.960 [2024-12-10 12:18:05.887709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.960 [2024-12-10 12:18:05.887730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.960 [2024-12-10 12:18:05.897235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.960 [2024-12-10 12:18:05.897255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.960 [2024-12-10 12:18:05.906452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.960 [2024-12-10 12:18:05.906473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.960 [2024-12-10 12:18:05.915170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.960 [2024-12-10 12:18:05.915196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.960 [2024-12-10 12:18:05.924504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.960 
[2024-12-10 12:18:05.924526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.960 [2024-12-10 12:18:05.931729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.960 [2024-12-10 12:18:05.931749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.960 [2024-12-10 12:18:05.942278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.960 [2024-12-10 12:18:05.942298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.960 [2024-12-10 12:18:05.951605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.960 [2024-12-10 12:18:05.951626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.960 [2024-12-10 12:18:05.960322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.960 [2024-12-10 12:18:05.960341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.960 [2024-12-10 12:18:05.969838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.960 [2024-12-10 12:18:05.969858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.960 [2024-12-10 12:18:05.978679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.960 [2024-12-10 12:18:05.978700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.961 [2024-12-10 12:18:05.987907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.961 [2024-12-10 12:18:05.987927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.961 [2024-12-10 12:18:05.997252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.961 [2024-12-10 12:18:05.997273] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.961 [2024-12-10 12:18:06.006189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.961 [2024-12-10 12:18:06.006210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.961 [2024-12-10 12:18:06.015221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.961 [2024-12-10 12:18:06.015243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.961 [2024-12-10 12:18:06.023963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.961 [2024-12-10 12:18:06.023983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.961 [2024-12-10 12:18:06.032673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.961 [2024-12-10 12:18:06.032693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.961 [2024-12-10 12:18:06.041397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.961 [2024-12-10 12:18:06.041417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.961 [2024-12-10 12:18:06.050728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.961 [2024-12-10 12:18:06.050748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.961 [2024-12-10 12:18:06.059287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.961 [2024-12-10 12:18:06.059307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.961 [2024-12-10 12:18:06.067977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.961 [2024-12-10 12:18:06.067997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:43.961 [2024-12-10 12:18:06.076718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.961 [2024-12-10 12:18:06.076739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.961 [2024-12-10 12:18:06.085585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.961 [2024-12-10 12:18:06.085610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.961 [2024-12-10 12:18:06.094329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.961 [2024-12-10 12:18:06.094350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.961 [2024-12-10 12:18:06.103116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.961 [2024-12-10 12:18:06.103137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.961 [2024-12-10 12:18:06.111876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.961 [2024-12-10 12:18:06.111899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.961 [2024-12-10 12:18:06.121103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.961 [2024-12-10 12:18:06.121124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.128115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.128136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.139505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.139526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.148305] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.148324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.157488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.157508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.166973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.166994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.175699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.175723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.185080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.185102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.192072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.192093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.202807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.202829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 16156.00 IOPS, 126.22 MiB/s [2024-12-10T11:18:06.387Z] [2024-12-10 12:18:06.212127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.212148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.221421] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.221443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.230730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.230750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.240089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.240109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.249539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.249559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.256504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.256527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.266907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.266928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.276352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.276372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.285607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.285627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.294735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.294757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.304169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.304190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.311109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.311128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.321614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.321635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.330265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.330286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.338955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.338977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.345970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.345991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.356679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.356701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.365504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 
[2024-12-10 12:18:06.365525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.374135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.374164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.219 [2024-12-10 12:18:06.382907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.219 [2024-12-10 12:18:06.382929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.478 [2024-12-10 12:18:06.391805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.478 [2024-12-10 12:18:06.391826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.478 [2024-12-10 12:18:06.400760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.478 [2024-12-10 12:18:06.400780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.478 [2024-12-10 12:18:06.409507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.478 [2024-12-10 12:18:06.409527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.478 [2024-12-10 12:18:06.418779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.478 [2024-12-10 12:18:06.418800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.478 [2024-12-10 12:18:06.427365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.478 [2024-12-10 12:18:06.427384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.478 [2024-12-10 12:18:06.434242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.478 [2024-12-10 12:18:06.434262] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.478 [2024-12-10 12:18:06.444414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.478 [2024-12-10 12:18:06.444435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.478 [2024-12-10 12:18:06.453343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.478 [2024-12-10 12:18:06.453364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.478 [2024-12-10 12:18:06.462627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.478 [2024-12-10 12:18:06.462647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.478 [2024-12-10 12:18:06.469563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.478 [2024-12-10 12:18:06.469583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.478 [2024-12-10 12:18:06.479989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.478 [2024-12-10 12:18:06.480010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.478 [2024-12-10 12:18:06.489133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.478 [2024-12-10 12:18:06.489153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.478 [2024-12-10 12:18:06.497945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.478 [2024-12-10 12:18:06.497965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.478 [2024-12-10 12:18:06.506552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.478 [2024-12-10 12:18:06.506572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:44.478 [2024-12-10 12:18:06.515804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.478 [2024-12-10 12:18:06.515824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.478 [2024-12-10 12:18:06.524555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.478 [2024-12-10 12:18:06.524576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.478 [2024-12-10 12:18:06.533085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.478 [2024-12-10 12:18:06.533105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.478 [2024-12-10 12:18:06.541715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.478 [2024-12-10 12:18:06.541736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.478 [2024-12-10 12:18:06.548627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.478 [2024-12-10 12:18:06.548648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.479 [2024-12-10 12:18:06.559759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.479 [2024-12-10 12:18:06.559780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.479 [2024-12-10 12:18:06.568606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.479 [2024-12-10 12:18:06.568627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.479 [2024-12-10 12:18:06.577004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.479 [2024-12-10 12:18:06.577024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.479 [2024-12-10 12:18:06.585801] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.479 [2024-12-10 12:18:06.585821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.479 [2024-12-10 12:18:06.594588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.479 [2024-12-10 12:18:06.594609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.479 [2024-12-10 12:18:06.603468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.479 [2024-12-10 12:18:06.603488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.479 [2024-12-10 12:18:06.612909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.479 [2024-12-10 12:18:06.612930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.479 [2024-12-10 12:18:06.621493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.479 [2024-12-10 12:18:06.621513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.479 [2024-12-10 12:18:06.630790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.479 [2024-12-10 12:18:06.630811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.479 [2024-12-10 12:18:06.640211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.479 [2024-12-10 12:18:06.640232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.649099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.649120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.658544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.658565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.667840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.667862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.676440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.676460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.685245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.685266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.694569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.694590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.703863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.703884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.712554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.712574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.721938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.721975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.731363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 
[2024-12-10 12:18:06.731385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.739999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.740019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.749234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.749254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.758082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.758102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.766947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.766968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.776511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.776534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.786011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.786033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.795532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.795554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.804988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.805009] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.813836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.813858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.822530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.822552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.832038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.832060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.840797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.840819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.849751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.849773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.859187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.859208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.868460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.868482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.877887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.877909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:44.738 [2024-12-10 12:18:06.887267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.887288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.738 [2024-12-10 12:18:06.896568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.738 [2024-12-10 12:18:06.896589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:06.905320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:06.905341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:06.914181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:06.914203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:06.922932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:06.922953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:06.931714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:06.931739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:06.941184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:06.941205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:06.949900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:06.949921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:06.959400] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:06.959421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:06.968930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:06.968951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:06.977766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:06.977787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:06.986562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:06.986583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:06.995453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:06.995474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:07.004317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:07.004339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:07.011344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:07.011364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:07.022415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:07.022436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:07.031459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:07.031479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:07.040449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:07.040469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:07.049182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:07.049203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:07.058310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:07.058332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:07.067383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:07.067405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:07.077343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:07.077364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:07.085913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:07.085934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:07.094902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:07.094923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:07.104349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 
[2024-12-10 12:18:07.104374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:07.113196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:07.113217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:07.122484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:07.122505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:07.131312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:07.131333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:07.140063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:07.140083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:07.149351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:07.149372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.998 [2024-12-10 12:18:07.158614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.998 [2024-12-10 12:18:07.158634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.165688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.165708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.176601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.176621] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.185635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.185655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.195004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.195023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.204251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.204271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 16258.00 IOPS, 127.02 MiB/s [2024-12-10T11:18:07.426Z] [2024-12-10 12:18:07.212780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.212801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.221410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.221432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.230760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.230781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.240070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.240090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.249511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.249531] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.256443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.256463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.267478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.267498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.276885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.276910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.286283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.286304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.295700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.295720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.304994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.305014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.312275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.312295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.322669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.322690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:45.258 [2024-12-10 12:18:07.331605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.331626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.340896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.340916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.350302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.350322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.357574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.357595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.367864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.367884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.376902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.376923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.385648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.385668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.394362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.394382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.401821] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.401840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.412303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.412323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.258 [2024-12-10 12:18:07.421210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.258 [2024-12-10 12:18:07.421231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.518 [2024-12-10 12:18:07.430713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.518 [2024-12-10 12:18:07.430733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.518 [2024-12-10 12:18:07.439420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.518 [2024-12-10 12:18:07.439440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.518 [2024-12-10 12:18:07.448370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.518 [2024-12-10 12:18:07.448392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.518 [2024-12-10 12:18:07.455517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.518 [2024-12-10 12:18:07.455537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.518 [2024-12-10 12:18:07.467073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.518 [2024-12-10 12:18:07.467093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.518 [2024-12-10 12:18:07.476525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:45.518 [2024-12-10 12:18:07.476545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.518 [2024-12-10 12:18:07.485834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.518 [2024-12-10 12:18:07.485854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.298 16310.00 IOPS, 127.42 MiB/s [2024-12-10T11:18:08.466Z] 00:09:46.818 [2024-12-10 12:18:08.960777] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.818 [2024-12-10 12:18:08.960797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.818 [2024-12-10 12:18:08.970170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.818 [2024-12-10 12:18:08.970190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.818 [2024-12-10 12:18:08.979255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.818 [2024-12-10 12:18:08.979276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:08.987965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:08.987986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:08.996748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:08.996768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.005523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.005544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.014211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.014233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.022875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.022896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.032153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.032180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.040912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.040932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.050321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.050341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.058954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.058974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.068394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.068415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.075908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.075929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.086979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.087000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.096502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.096524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.106559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 
[2024-12-10 12:18:09.106580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.116004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.116024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.124811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.124832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.133699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.133720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.143179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.143199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.152076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.152097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.160786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.160806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.170066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.170086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.179690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.179710] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.186606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.186625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.197414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.197435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.204350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.204369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 16318.00 IOPS, 127.48 MiB/s [2024-12-10T11:18:09.245Z] [2024-12-10 12:18:09.215536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.215557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.233555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.233575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.077 [2024-12-10 12:18:09.242646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.077 [2024-12-10 12:18:09.242666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.336 [2024-12-10 12:18:09.251999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.336 [2024-12-10 12:18:09.252019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.336 [2024-12-10 12:18:09.261305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.336 [2024-12-10 12:18:09.261336] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.336 [2024-12-10 12:18:09.268517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.336 [2024-12-10 12:18:09.268538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.336 [2024-12-10 12:18:09.278752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.336 [2024-12-10 12:18:09.278773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.336 [2024-12-10 12:18:09.287553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.287573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.337 [2024-12-10 12:18:09.296506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.296526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.337 [2024-12-10 12:18:09.305058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.305078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.337 [2024-12-10 12:18:09.313710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.313731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.337 [2024-12-10 12:18:09.322376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.322396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.337 [2024-12-10 12:18:09.329329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.329354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:47.337 [2024-12-10 12:18:09.340471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.340492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.337 [2024-12-10 12:18:09.350053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.350073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.337 [2024-12-10 12:18:09.359210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.359230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.337 [2024-12-10 12:18:09.368563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.368584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.337 [2024-12-10 12:18:09.377984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.378004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.337 [2024-12-10 12:18:09.386710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.386731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.337 [2024-12-10 12:18:09.396302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.396323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.337 [2024-12-10 12:18:09.405177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.405197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.337 [2024-12-10 12:18:09.413866] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.413887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.337 [2024-12-10 12:18:09.422482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.422503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.337 [2024-12-10 12:18:09.431186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.431207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.337 [2024-12-10 12:18:09.439729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.439751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.337 [2024-12-10 12:18:09.449205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.449227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.337 [2024-12-10 12:18:09.457844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.457866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.337 [2024-12-10 12:18:09.467279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.467300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.337 [2024-12-10 12:18:09.476012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.476033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.337 [2024-12-10 12:18:09.485239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.485260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.337 [2024-12-10 12:18:09.492178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.337 [2024-12-10 12:18:09.492214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.596 [2024-12-10 12:18:09.503675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.596 [2024-12-10 12:18:09.503700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.596 [2024-12-10 12:18:09.513205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.596 [2024-12-10 12:18:09.513227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.596 [2024-12-10 12:18:09.522398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.596 [2024-12-10 12:18:09.522420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.596 [2024-12-10 12:18:09.531198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.596 [2024-12-10 12:18:09.531219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.596 [2024-12-10 12:18:09.540450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.596 [2024-12-10 12:18:09.540473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.596 [2024-12-10 12:18:09.549841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.596 [2024-12-10 12:18:09.549863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.596 [2024-12-10 12:18:09.559194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.596 
[2024-12-10 12:18:09.559216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.596 [2024-12-10 12:18:09.567932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.596 [2024-12-10 12:18:09.567955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.596 [2024-12-10 12:18:09.576698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.596 [2024-12-10 12:18:09.576720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.596 [2024-12-10 12:18:09.585368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.596 [2024-12-10 12:18:09.585389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.596 [2024-12-10 12:18:09.594007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.596 [2024-12-10 12:18:09.594027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.596 [2024-12-10 12:18:09.602686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.596 [2024-12-10 12:18:09.602707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.596 [2024-12-10 12:18:09.611359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.596 [2024-12-10 12:18:09.611380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.596 [2024-12-10 12:18:09.620871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.596 [2024-12-10 12:18:09.620893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.596 [2024-12-10 12:18:09.630330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.596 [2024-12-10 12:18:09.630352] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.596 [2024-12-10 12:18:09.638984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.596 [2024-12-10 12:18:09.639005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.596 [2024-12-10 12:18:09.648307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.596 [2024-12-10 12:18:09.648328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.596 [2024-12-10 12:18:09.657901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.596 [2024-12-10 12:18:09.657922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.596 [2024-12-10 12:18:09.665331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.596 [2024-12-10 12:18:09.665352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.596 [2024-12-10 12:18:09.676487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.596 [2024-12-10 12:18:09.676512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.596 [2024-12-10 12:18:09.685353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.597 [2024-12-10 12:18:09.685374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.597 [2024-12-10 12:18:09.694588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.597 [2024-12-10 12:18:09.694609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.597 [2024-12-10 12:18:09.703351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.597 [2024-12-10 12:18:09.703372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:47.597 [2024-12-10 12:18:09.712842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.597 [2024-12-10 12:18:09.712864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.597 [2024-12-10 12:18:09.722168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.597 [2024-12-10 12:18:09.722206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.597 [2024-12-10 12:18:09.731423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.597 [2024-12-10 12:18:09.731444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.597 [2024-12-10 12:18:09.740462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.597 [2024-12-10 12:18:09.740484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.597 [2024-12-10 12:18:09.749071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.597 [2024-12-10 12:18:09.749091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.597 [2024-12-10 12:18:09.757845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.597 [2024-12-10 12:18:09.757866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.767523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.767544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.776286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.776307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.785545] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.785567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.794773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.794794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.804330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.804350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.812914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.812936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.820354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.820375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.830572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.830593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.839532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.839553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.848346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.848367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.857775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.857796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.864773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.864794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.875132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.875154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.883989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.884009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.890977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.890997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.901431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.901452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.910791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.910811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.920015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.920036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.928826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 
[2024-12-10 12:18:09.928847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.937492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.937513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.946328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.946349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.955867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.955888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.965354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.965375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.974833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.974853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.984177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.984197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:09.993409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:09.993430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:10.002825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:10.002848] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:10.011771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:10.011792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.856 [2024-12-10 12:18:10.021070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.856 [2024-12-10 12:18:10.021090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.116 [2024-12-10 12:18:10.030117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.116 [2024-12-10 12:18:10.030137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.116 [2024-12-10 12:18:10.039728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.116 [2024-12-10 12:18:10.039750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.116 [2024-12-10 12:18:10.049050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.116 [2024-12-10 12:18:10.049071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.116 [2024-12-10 12:18:10.057884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.116 [2024-12-10 12:18:10.057905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.116 [2024-12-10 12:18:10.065170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.116 [2024-12-10 12:18:10.065190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.116 [2024-12-10 12:18:10.075983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.116 [2024-12-10 12:18:10.076003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:48.116 [2024-12-10 12:18:10.085977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.116 [2024-12-10 12:18:10.085998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.116 [2024-12-10 12:18:10.095389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.116 [2024-12-10 12:18:10.095410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.116 [2024-12-10 12:18:10.104240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.116 [2024-12-10 12:18:10.104261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.116 [2024-12-10 12:18:10.113743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.116 [2024-12-10 12:18:10.113764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.116 [2024-12-10 12:18:10.122458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.116 [2024-12-10 12:18:10.122479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.116 [2024-12-10 12:18:10.131306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.116 [2024-12-10 12:18:10.131326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.116 [2024-12-10 12:18:10.140145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.116 [2024-12-10 12:18:10.140171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.116 [2024-12-10 12:18:10.149807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.117 [2024-12-10 12:18:10.149828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.117 [2024-12-10 12:18:10.158915] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.117 [2024-12-10 12:18:10.158936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.117 [2024-12-10 12:18:10.167722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.117 [2024-12-10 12:18:10.167743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.117 [2024-12-10 12:18:10.174839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.117 [2024-12-10 12:18:10.174859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.117 [2024-12-10 12:18:10.185541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.117 [2024-12-10 12:18:10.185562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.117 [2024-12-10 12:18:10.192836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.117 [2024-12-10 12:18:10.192856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.117 [2024-12-10 12:18:10.202830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.117 [2024-12-10 12:18:10.202851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.117 [2024-12-10 12:18:10.211736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.117 [2024-12-10 12:18:10.211756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.117 16316.00 IOPS, 127.47 MiB/s [2024-12-10T11:18:10.285Z] [2024-12-10 12:18:10.218641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.117 [2024-12-10 12:18:10.218662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.117 00:09:48.117 Latency(us) 
00:09:48.117 [2024-12-10T11:18:10.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.117 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:48.117 Nvme1n1 : 5.01 16318.54 127.49 0.00 0.00 7836.17 3048.85 19717.79 00:09:48.117 [2024-12-10T11:18:10.285Z] =================================================================================================================== 00:09:48.117 [2024-12-10T11:18:10.285Z] Total : 16318.54 127.49 0.00 0.00 7836.17 3048.85 19717.79 00:09:48.117 [2024-12-10 12:18:10.226339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.117 [2024-12-10 12:18:10.226358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.117 [2024-12-10 12:18:10.234361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.117 [2024-12-10 12:18:10.234380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.117 [2024-12-10 12:18:10.242381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.117 [2024-12-10 12:18:10.242397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.117 [2024-12-10 12:18:10.250413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.117 [2024-12-10 12:18:10.250436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.117 [2024-12-10 12:18:10.258429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.117 [2024-12-10 12:18:10.258448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.117 [2024-12-10 12:18:10.266449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.117 [2024-12-10 12:18:10.266466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:48.117 [2024-12-10 12:18:10.274472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.117 [2024-12-10 12:18:10.274490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.117 [2024-12-10 12:18:10.282491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.117 [2024-12-10 12:18:10.282507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.376 [2024-12-10 12:18:10.290511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.376 [2024-12-10 12:18:10.290531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.376 [2024-12-10 12:18:10.298534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.376 [2024-12-10 12:18:10.298551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.376 [2024-12-10 12:18:10.306555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.376 [2024-12-10 12:18:10.306576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.376 [2024-12-10 12:18:10.314579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.376 [2024-12-10 12:18:10.314605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.376 [2024-12-10 12:18:10.322596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.376 [2024-12-10 12:18:10.322612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.376 [2024-12-10 12:18:10.330618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.376 [2024-12-10 12:18:10.330635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.376 [2024-12-10 12:18:10.338639] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.376 [2024-12-10 12:18:10.338654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.376 [2024-12-10 12:18:10.346661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.376 [2024-12-10 12:18:10.346678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.376 [2024-12-10 12:18:10.354682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.376 [2024-12-10 12:18:10.354698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.376 [2024-12-10 12:18:10.362704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.376 [2024-12-10 12:18:10.362720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.376 [2024-12-10 12:18:10.370722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.376 [2024-12-10 12:18:10.370738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.376 [2024-12-10 12:18:10.378743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.376 [2024-12-10 12:18:10.378759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.376 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1506865) - No such process 00:09:48.376 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1506865 00:09:48.376 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.376 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.376 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:09:48.376 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.376 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:48.376 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.376 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:48.376 delay0 00:09:48.376 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.376 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:48.376 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.376 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:48.376 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.376 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:48.376 [2024-12-10 12:18:10.527985] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:54.938 [2024-12-10 12:18:16.662906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7299b0 is same with the state(6) to be set 00:09:54.938 Initializing NVMe Controllers 00:09:54.938 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:54.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 
00:09:54.938 Initialization complete. Launching workers. 00:09:54.938 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 103 00:09:54.938 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 393, failed to submit 30 00:09:54.939 success 211, unsuccessful 182, failed 0 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:54.939 rmmod nvme_tcp 00:09:54.939 rmmod nvme_fabrics 00:09:54.939 rmmod nvme_keyring 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1504879 ']' 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1504879 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1504879 ']' 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1504879 00:09:54.939 
12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1504879 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1504879' 00:09:54.939 killing process with pid 1504879 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1504879 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1504879 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.939 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:57.476 00:09:57.476 real 0m31.482s 00:09:57.476 user 0m42.127s 00:09:57.476 sys 0m10.702s 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:57.476 ************************************ 00:09:57.476 END TEST nvmf_zcopy 00:09:57.476 ************************************ 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:57.476 ************************************ 00:09:57.476 START TEST nvmf_nmic 00:09:57.476 ************************************ 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:57.476 * Looking for test storage... 
00:09:57.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.476 12:18:19 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:57.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.476 --rc genhtml_branch_coverage=1 00:09:57.476 --rc genhtml_function_coverage=1 00:09:57.476 --rc genhtml_legend=1 00:09:57.476 --rc geninfo_all_blocks=1 00:09:57.476 --rc geninfo_unexecuted_blocks=1 
00:09:57.476 00:09:57.476 ' 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:57.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.476 --rc genhtml_branch_coverage=1 00:09:57.476 --rc genhtml_function_coverage=1 00:09:57.476 --rc genhtml_legend=1 00:09:57.476 --rc geninfo_all_blocks=1 00:09:57.476 --rc geninfo_unexecuted_blocks=1 00:09:57.476 00:09:57.476 ' 00:09:57.476 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:57.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.476 --rc genhtml_branch_coverage=1 00:09:57.476 --rc genhtml_function_coverage=1 00:09:57.476 --rc genhtml_legend=1 00:09:57.476 --rc geninfo_all_blocks=1 00:09:57.476 --rc geninfo_unexecuted_blocks=1 00:09:57.477 00:09:57.477 ' 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:57.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.477 --rc genhtml_branch_coverage=1 00:09:57.477 --rc genhtml_function_coverage=1 00:09:57.477 --rc genhtml_legend=1 00:09:57.477 --rc geninfo_all_blocks=1 00:09:57.477 --rc geninfo_unexecuted_blocks=1 00:09:57.477 00:09:57.477 ' 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.477 12:18:19 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:57.477 
12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:57.477 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:02.939 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:02.939 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:02.939 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:02.939 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:02.939 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:02.939 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:02.939 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:02.939 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:02.939 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:02.939 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:02.939 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:02.939 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:02.940 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:02.940 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:02.940 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:02.940 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:02.940 12:18:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:02.940 12:18:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:02.940 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:02.940 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:02.940 Found net devices under 0000:86:00.0: cvl_0_0 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:02.940 Found net devices under 0000:86:00.1: cvl_0_1 00:10:02.940 
12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:02.940 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:03.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:03.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:10:03.199 00:10:03.199 --- 10.0.0.2 ping statistics --- 00:10:03.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.199 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:03.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:03.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:10:03.199 00:10:03.199 --- 10.0.0.1 ping statistics --- 00:10:03.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.199 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1512871 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1512871 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1512871 ']' 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.199 12:18:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.200 [2024-12-10 12:18:25.360307] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:10:03.200 [2024-12-10 12:18:25.360350] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.458 [2024-12-10 12:18:25.440518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:03.458 [2024-12-10 12:18:25.481375] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.458 [2024-12-10 12:18:25.481414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:03.458 [2024-12-10 12:18:25.481421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.458 [2024-12-10 12:18:25.481427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.458 [2024-12-10 12:18:25.481431] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:03.458 [2024-12-10 12:18:25.483041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.458 [2024-12-10 12:18:25.483150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.458 [2024-12-10 12:18:25.483258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.458 [2024-12-10 12:18:25.483267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.023 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.023 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:04.023 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:04.023 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:04.282 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.282 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.282 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:04.282 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.282 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.282 [2024-12-10 12:18:26.227485] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.282 
12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.282 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:04.282 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.282 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.282 Malloc0 00:10:04.282 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.282 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:04.282 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.282 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.282 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.282 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:04.282 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.282 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.282 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.282 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.282 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.282 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.283 [2024-12-10 12:18:26.290618] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:04.283 test case1: single bdev can't be used in multiple subsystems 00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.283 [2024-12-10 12:18:26.318519] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:04.283 [2024-12-10 
12:18:26.318538] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:04.283 [2024-12-10 12:18:26.318546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.283 request: 00:10:04.283 { 00:10:04.283 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:04.283 "namespace": { 00:10:04.283 "bdev_name": "Malloc0", 00:10:04.283 "no_auto_visible": false, 00:10:04.283 "hide_metadata": false 00:10:04.283 }, 00:10:04.283 "method": "nvmf_subsystem_add_ns", 00:10:04.283 "req_id": 1 00:10:04.283 } 00:10:04.283 Got JSON-RPC error response 00:10:04.283 response: 00:10:04.283 { 00:10:04.283 "code": -32602, 00:10:04.283 "message": "Invalid parameters" 00:10:04.283 } 00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:04.283 Adding namespace failed - expected result. 
00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:04.283 test case2: host connect to nvmf target in multiple paths 00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.283 [2024-12-10 12:18:26.330659] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.283 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:05.657 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:06.591 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:06.591 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:06.591 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:06.591 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:06.591 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:10:09.118 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:09.118 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:09.118 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:09.118 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:09.118 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:09.118 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:09.118 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:09.118 [global] 00:10:09.119 thread=1 00:10:09.119 invalidate=1 00:10:09.119 rw=write 00:10:09.119 time_based=1 00:10:09.119 runtime=1 00:10:09.119 ioengine=libaio 00:10:09.119 direct=1 00:10:09.119 bs=4096 00:10:09.119 iodepth=1 00:10:09.119 norandommap=0 00:10:09.119 numjobs=1 00:10:09.119 00:10:09.119 verify_dump=1 00:10:09.119 verify_backlog=512 00:10:09.119 verify_state_save=0 00:10:09.119 do_verify=1 00:10:09.119 verify=crc32c-intel 00:10:09.119 [job0] 00:10:09.119 filename=/dev/nvme0n1 00:10:09.119 Could not set queue depth (nvme0n1) 00:10:09.119 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.119 fio-3.35 00:10:09.119 Starting 1 thread 00:10:10.052 00:10:10.052 job0: (groupid=0, jobs=1): err= 0: pid=1513955: Tue Dec 10 12:18:32 2024 00:10:10.052 read: IOPS=22, BW=90.6KiB/s (92.8kB/s)(92.0KiB/1015msec) 00:10:10.052 slat (nsec): min=9380, max=23281, avg=21833.65, stdev=3023.07 00:10:10.052 clat (usec): min=40903, max=41086, avg=40967.85, stdev=44.92 00:10:10.052 lat (usec): min=40925, max=41107, 
avg=40989.68, stdev=44.02 00:10:10.052 clat percentiles (usec): 00:10:10.052 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:10.052 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:10.052 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:10.052 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:10.052 | 99.99th=[41157] 00:10:10.052 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:10:10.052 slat (nsec): min=9250, max=40247, avg=10220.62, stdev=1884.98 00:10:10.052 clat (usec): min=114, max=359, avg=129.29, stdev=13.76 00:10:10.052 lat (usec): min=124, max=399, avg=139.51, stdev=14.91 00:10:10.052 clat percentiles (usec): 00:10:10.052 | 1.00th=[ 119], 5.00th=[ 121], 10.00th=[ 122], 20.00th=[ 124], 00:10:10.052 | 30.00th=[ 125], 40.00th=[ 126], 50.00th=[ 127], 60.00th=[ 128], 00:10:10.053 | 70.00th=[ 130], 80.00th=[ 133], 90.00th=[ 137], 95.00th=[ 149], 00:10:10.053 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 359], 99.95th=[ 359], 00:10:10.053 | 99.99th=[ 359] 00:10:10.053 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:10.053 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:10.053 lat (usec) : 250=95.51%, 500=0.19% 00:10:10.053 lat (msec) : 50=4.30% 00:10:10.053 cpu : usr=0.49%, sys=0.30%, ctx=535, majf=0, minf=1 00:10:10.053 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.053 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.053 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.053 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.053 00:10:10.053 Run status group 0 (all jobs): 00:10:10.053 READ: bw=90.6KiB/s (92.8kB/s), 90.6KiB/s-90.6KiB/s (92.8kB/s-92.8kB/s), io=92.0KiB (94.2kB), 
run=1015-1015msec 00:10:10.053 WRITE: bw=2018KiB/s (2066kB/s), 2018KiB/s-2018KiB/s (2066kB/s-2066kB/s), io=2048KiB (2097kB), run=1015-1015msec 00:10:10.053 00:10:10.053 Disk stats (read/write): 00:10:10.053 nvme0n1: ios=70/512, merge=0/0, ticks=833/67, in_queue=900, util=91.28% 00:10:10.053 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:10.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:10.311 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:10.311 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:10.311 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:10.311 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.311 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:10.311 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.311 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:10.311 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:10.311 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:10.311 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:10.311 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:10.311 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:10.311 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:10.311 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:10:10.311 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:10.311 rmmod nvme_tcp 00:10:10.311 rmmod nvme_fabrics 00:10:10.311 rmmod nvme_keyring 00:10:10.311 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1512871 ']' 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1512871 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1512871 ']' 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1512871 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1512871 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1512871' 00:10:10.570 killing process with pid 1512871 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1512871 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1512871 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.570 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.107 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:13.107 00:10:13.107 real 0m15.696s 00:10:13.107 user 0m36.275s 00:10:13.107 sys 0m5.232s 00:10:13.107 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.107 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:13.107 ************************************ 00:10:13.107 END TEST nvmf_nmic 00:10:13.107 ************************************ 00:10:13.107 12:18:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:10:13.107 12:18:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:13.107 12:18:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.107 12:18:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:13.107 ************************************ 00:10:13.107 START TEST nvmf_fio_target 00:10:13.107 ************************************ 00:10:13.107 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:13.107 * Looking for test storage... 00:10:13.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:10:13.107 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:13.107 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:13.107 12:18:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:13.107 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:13.107 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@337 -- # read -ra ver2 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.108 12:18:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:13.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.108 --rc genhtml_branch_coverage=1 00:10:13.108 --rc genhtml_function_coverage=1 00:10:13.108 --rc genhtml_legend=1 00:10:13.108 --rc geninfo_all_blocks=1 00:10:13.108 --rc geninfo_unexecuted_blocks=1 00:10:13.108 00:10:13.108 ' 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:13.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.108 --rc genhtml_branch_coverage=1 00:10:13.108 --rc genhtml_function_coverage=1 00:10:13.108 --rc genhtml_legend=1 00:10:13.108 --rc geninfo_all_blocks=1 00:10:13.108 --rc geninfo_unexecuted_blocks=1 00:10:13.108 00:10:13.108 ' 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:13.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.108 --rc genhtml_branch_coverage=1 00:10:13.108 --rc genhtml_function_coverage=1 00:10:13.108 --rc genhtml_legend=1 00:10:13.108 --rc geninfo_all_blocks=1 00:10:13.108 --rc geninfo_unexecuted_blocks=1 00:10:13.108 00:10:13.108 ' 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:13.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.108 --rc 
genhtml_branch_coverage=1 00:10:13.108 --rc genhtml_function_coverage=1 00:10:13.108 --rc genhtml_legend=1 00:10:13.108 --rc geninfo_all_blocks=1 00:10:13.108 --rc geninfo_unexecuted_blocks=1 00:10:13.108 00:10:13.108 ' 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:13.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:13.108 12:18:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:13.108 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:13.109 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.109 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.109 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.109 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:13.109 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:13.109 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:13.109 12:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.679 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:19.679 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:19.679 
12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:19.679 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:19.679 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:19.679 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:19.679 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:19.679 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:19.679 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:19.679 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:19.679 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:19.679 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:19.679 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:19.679 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:19.679 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:19.679 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:19.679 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:19.679 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:19.679 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:19.679 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:19.679 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:19.679 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:19.679 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:19.680 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:19.680 12:18:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:19.680 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:19.680 Found net devices under 0000:86:00.0: cvl_0_0 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:19.680 Found net devices under 0000:86:00.1: cvl_0_1 
00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:19.680 12:18:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:19.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:19.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:10:19.680 00:10:19.680 --- 10.0.0.2 ping statistics --- 00:10:19.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.680 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:19.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:19.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:10:19.680 00:10:19.680 --- 10.0.0.1 ping statistics --- 00:10:19.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.680 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1517725 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1517725 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1517725 ']' 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.680 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.680 [2024-12-10 12:18:41.250183] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:10:19.681 [2024-12-10 12:18:41.250227] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.681 [2024-12-10 12:18:41.328473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:19.681 [2024-12-10 12:18:41.369937] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.681 [2024-12-10 12:18:41.369975] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:19.681 [2024-12-10 12:18:41.369981] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:19.681 [2024-12-10 12:18:41.369988] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:19.681 [2024-12-10 12:18:41.369993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:19.681 [2024-12-10 12:18:41.371394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.681 [2024-12-10 12:18:41.371523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:19.681 [2024-12-10 12:18:41.371631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.681 [2024-12-10 12:18:41.371633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:19.681 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.681 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:19.681 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:19.681 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:19.681 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.681 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.681 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:19.681 [2024-12-10 12:18:41.673259] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.681 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.939 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:19.939 12:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.198 12:18:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:20.198 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.456 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:20.456 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.456 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:20.456 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:20.715 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.973 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:20.973 12:18:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:21.232 12:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:21.232 12:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:21.490 12:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:21.490 12:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat 
-z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:21.490 12:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:21.748 12:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:21.748 12:18:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:22.006 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:22.006 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:22.265 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.265 [2024-12-10 12:18:44.391413] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.265 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:22.523 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:22.782 12:18:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.2 -s 4420 00:10:24.155 12:18:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:24.155 12:18:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:24.155 12:18:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:24.155 12:18:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:24.155 12:18:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:24.155 12:18:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:26.057 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:26.057 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:26.057 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:26.057 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:26.057 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:26.057 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:26.057 12:18:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:26.057 [global] 00:10:26.057 thread=1 00:10:26.057 invalidate=1 00:10:26.057 rw=write 00:10:26.057 time_based=1 00:10:26.057 runtime=1 00:10:26.057 ioengine=libaio 00:10:26.057 direct=1 00:10:26.057 bs=4096 00:10:26.057 iodepth=1 00:10:26.057 norandommap=0 00:10:26.057 numjobs=1 
00:10:26.057 00:10:26.057 verify_dump=1 00:10:26.057 verify_backlog=512 00:10:26.057 verify_state_save=0 00:10:26.057 do_verify=1 00:10:26.057 verify=crc32c-intel 00:10:26.057 [job0] 00:10:26.057 filename=/dev/nvme0n1 00:10:26.057 [job1] 00:10:26.057 filename=/dev/nvme0n2 00:10:26.057 [job2] 00:10:26.057 filename=/dev/nvme0n3 00:10:26.057 [job3] 00:10:26.057 filename=/dev/nvme0n4 00:10:26.057 Could not set queue depth (nvme0n1) 00:10:26.057 Could not set queue depth (nvme0n2) 00:10:26.057 Could not set queue depth (nvme0n3) 00:10:26.057 Could not set queue depth (nvme0n4) 00:10:26.317 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.317 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.317 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.317 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.317 fio-3.35 00:10:26.317 Starting 4 threads 00:10:27.714 00:10:27.714 job0: (groupid=0, jobs=1): err= 0: pid=1519075: Tue Dec 10 12:18:49 2024 00:10:27.714 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:10:27.714 slat (nsec): min=9539, max=23819, avg=21333.91, stdev=3651.58 00:10:27.714 clat (usec): min=40833, max=42086, avg=41343.31, stdev=498.13 00:10:27.714 lat (usec): min=40855, max=42109, avg=41364.64, stdev=499.11 00:10:27.714 clat percentiles (usec): 00:10:27.714 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:27.714 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:27.714 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:27.714 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:27.714 | 99.99th=[42206] 00:10:27.714 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:10:27.714 slat 
(nsec): min=9958, max=36727, avg=11089.29, stdev=1487.12 00:10:27.714 clat (usec): min=134, max=393, avg=179.87, stdev=31.92 00:10:27.714 lat (usec): min=147, max=404, avg=190.96, stdev=32.15 00:10:27.714 clat percentiles (usec): 00:10:27.714 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:10:27.714 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 180], 00:10:27.714 | 70.00th=[ 188], 80.00th=[ 198], 90.00th=[ 239], 95.00th=[ 243], 00:10:27.714 | 99.00th=[ 251], 99.50th=[ 330], 99.90th=[ 396], 99.95th=[ 396], 00:10:27.714 | 99.99th=[ 396] 00:10:27.714 bw ( KiB/s): min= 4096, max= 4096, per=23.11%, avg=4096.00, stdev= 0.00, samples=1 00:10:27.714 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:27.714 lat (usec) : 250=94.38%, 500=1.50% 00:10:27.714 lat (msec) : 50=4.12% 00:10:27.714 cpu : usr=0.20%, sys=0.60%, ctx=535, majf=0, minf=1 00:10:27.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.714 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.714 job1: (groupid=0, jobs=1): err= 0: pid=1519076: Tue Dec 10 12:18:49 2024 00:10:27.714 read: IOPS=896, BW=3584KiB/s (3670kB/s)(3692KiB/1030msec) 00:10:27.714 slat (nsec): min=6474, max=29145, avg=7831.09, stdev=2110.37 00:10:27.714 clat (usec): min=168, max=42234, avg=856.13, stdev=5039.16 00:10:27.714 lat (usec): min=176, max=42244, avg=863.96, stdev=5040.79 00:10:27.714 clat percentiles (usec): 00:10:27.714 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 202], 00:10:27.714 | 30.00th=[ 212], 40.00th=[ 231], 50.00th=[ 237], 60.00th=[ 243], 00:10:27.714 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 269], 00:10:27.714 | 99.00th=[41157], 99.50th=[41681], 
99.90th=[42206], 99.95th=[42206], 00:10:27.714 | 99.99th=[42206] 00:10:27.714 write: IOPS=994, BW=3977KiB/s (4072kB/s)(4096KiB/1030msec); 0 zone resets 00:10:27.714 slat (usec): min=9, max=37758, avg=71.71, stdev=1406.05 00:10:27.714 clat (usec): min=111, max=328, avg=150.59, stdev=23.05 00:10:27.714 lat (usec): min=121, max=37937, avg=222.30, stdev=1407.51 00:10:27.714 clat percentiles (usec): 00:10:27.714 | 1.00th=[ 117], 5.00th=[ 124], 10.00th=[ 128], 20.00th=[ 133], 00:10:27.714 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 145], 60.00th=[ 149], 00:10:27.714 | 70.00th=[ 155], 80.00th=[ 169], 90.00th=[ 186], 95.00th=[ 196], 00:10:27.714 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 273], 99.95th=[ 330], 00:10:27.714 | 99.99th=[ 330] 00:10:27.714 bw ( KiB/s): min= 4096, max= 4096, per=23.11%, avg=4096.00, stdev= 0.00, samples=2 00:10:27.714 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:10:27.714 lat (usec) : 250=87.42%, 500=11.86% 00:10:27.714 lat (msec) : 50=0.72% 00:10:27.714 cpu : usr=0.58%, sys=2.14%, ctx=1951, majf=0, minf=1 00:10:27.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.714 issued rwts: total=923,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.714 job2: (groupid=0, jobs=1): err= 0: pid=1519077: Tue Dec 10 12:18:49 2024 00:10:27.714 read: IOPS=1000, BW=4004KiB/s (4100kB/s)(4164KiB/1040msec) 00:10:27.714 slat (nsec): min=6650, max=35281, avg=8987.81, stdev=2317.31 00:10:27.714 clat (usec): min=187, max=42249, avg=722.73, stdev=4380.63 00:10:27.714 lat (usec): min=196, max=42256, avg=731.72, stdev=4381.23 00:10:27.714 clat percentiles (usec): 00:10:27.714 | 1.00th=[ 202], 5.00th=[ 215], 10.00th=[ 223], 20.00th=[ 231], 00:10:27.714 | 30.00th=[ 237], 
40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:10:27.714 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 281], 00:10:27.714 | 99.00th=[40633], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:27.714 | 99.99th=[42206] 00:10:27.714 write: IOPS=1476, BW=5908KiB/s (6049kB/s)(6144KiB/1040msec); 0 zone resets 00:10:27.714 slat (nsec): min=9474, max=48966, avg=12289.07, stdev=3007.24 00:10:27.714 clat (usec): min=115, max=364, avg=164.09, stdev=25.20 00:10:27.714 lat (usec): min=125, max=403, avg=176.38, stdev=26.42 00:10:27.714 clat percentiles (usec): 00:10:27.714 | 1.00th=[ 125], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 141], 00:10:27.714 | 30.00th=[ 149], 40.00th=[ 155], 50.00th=[ 163], 60.00th=[ 169], 00:10:27.714 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 196], 95.00th=[ 204], 00:10:27.714 | 99.00th=[ 225], 99.50th=[ 258], 99.90th=[ 343], 99.95th=[ 363], 00:10:27.714 | 99.99th=[ 363] 00:10:27.714 bw ( KiB/s): min= 4096, max= 8192, per=34.67%, avg=6144.00, stdev=2896.31, samples=2 00:10:27.714 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:10:27.714 lat (usec) : 250=85.22%, 500=14.24% 00:10:27.714 lat (msec) : 2=0.04%, 10=0.04%, 50=0.47% 00:10:27.714 cpu : usr=1.64%, sys=3.18%, ctx=2578, majf=0, minf=2 00:10:27.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.714 issued rwts: total=1041,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.714 job3: (groupid=0, jobs=1): err= 0: pid=1519078: Tue Dec 10 12:18:49 2024 00:10:27.714 read: IOPS=1013, BW=4055KiB/s (4152kB/s)(4164KiB/1027msec) 00:10:27.714 slat (nsec): min=7208, max=32011, avg=9025.17, stdev=2213.89 00:10:27.714 clat (usec): min=174, max=41045, avg=691.05, stdev=4323.23 00:10:27.714 lat 
(usec): min=182, max=41053, avg=700.08, stdev=4323.47 00:10:27.714 clat percentiles (usec): 00:10:27.714 | 1.00th=[ 188], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:10:27.714 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:10:27.714 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 255], 00:10:27.714 | 99.00th=[40633], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:10:27.714 | 99.99th=[41157] 00:10:27.714 write: IOPS=1495, BW=5982KiB/s (6126kB/s)(6144KiB/1027msec); 0 zone resets 00:10:27.714 slat (nsec): min=10191, max=43434, avg=12183.21, stdev=2427.51 00:10:27.714 clat (usec): min=120, max=365, avg=176.86, stdev=29.93 00:10:27.714 lat (usec): min=131, max=376, avg=189.04, stdev=30.38 00:10:27.714 clat percentiles (usec): 00:10:27.714 | 1.00th=[ 129], 5.00th=[ 139], 10.00th=[ 147], 20.00th=[ 157], 00:10:27.714 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 180], 00:10:27.714 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 202], 95.00th=[ 239], 00:10:27.714 | 99.00th=[ 289], 99.50th=[ 310], 99.90th=[ 338], 99.95th=[ 367], 00:10:27.714 | 99.99th=[ 367] 00:10:27.714 bw ( KiB/s): min= 4096, max= 8192, per=34.67%, avg=6144.00, stdev=2896.31, samples=2 00:10:27.714 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:10:27.714 lat (usec) : 250=94.99%, 500=4.50% 00:10:27.714 lat (msec) : 2=0.04%, 50=0.47% 00:10:27.714 cpu : usr=2.05%, sys=4.29%, ctx=2577, majf=0, minf=2 00:10:27.714 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.714 issued rwts: total=1041,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.714 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.714 00:10:27.714 Run status group 0 (all jobs): 00:10:27.714 READ: bw=11.4MiB/s (11.9MB/s), 87.2KiB/s-4055KiB/s 
(89.3kB/s-4152kB/s), io=11.8MiB (12.4MB), run=1009-1040msec 00:10:27.714 WRITE: bw=17.3MiB/s (18.1MB/s), 2030KiB/s-5982KiB/s (2078kB/s-6126kB/s), io=18.0MiB (18.9MB), run=1009-1040msec 00:10:27.714 00:10:27.714 Disk stats (read/write): 00:10:27.714 nvme0n1: ios=50/512, merge=0/0, ticks=1737/95, in_queue=1832, util=98.40% 00:10:27.715 nvme0n2: ios=945/1024, merge=0/0, ticks=997/154, in_queue=1151, util=98.88% 00:10:27.715 nvme0n3: ios=1058/1536, merge=0/0, ticks=1521/247, in_queue=1768, util=98.44% 00:10:27.715 nvme0n4: ios=1032/1536, merge=0/0, ticks=503/259, in_queue=762, util=89.71% 00:10:27.715 12:18:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:27.715 [global] 00:10:27.715 thread=1 00:10:27.715 invalidate=1 00:10:27.715 rw=randwrite 00:10:27.715 time_based=1 00:10:27.715 runtime=1 00:10:27.715 ioengine=libaio 00:10:27.715 direct=1 00:10:27.715 bs=4096 00:10:27.715 iodepth=1 00:10:27.715 norandommap=0 00:10:27.715 numjobs=1 00:10:27.715 00:10:27.715 verify_dump=1 00:10:27.715 verify_backlog=512 00:10:27.715 verify_state_save=0 00:10:27.715 do_verify=1 00:10:27.715 verify=crc32c-intel 00:10:27.715 [job0] 00:10:27.715 filename=/dev/nvme0n1 00:10:27.715 [job1] 00:10:27.715 filename=/dev/nvme0n2 00:10:27.715 [job2] 00:10:27.715 filename=/dev/nvme0n3 00:10:27.715 [job3] 00:10:27.715 filename=/dev/nvme0n4 00:10:27.715 Could not set queue depth (nvme0n1) 00:10:27.715 Could not set queue depth (nvme0n2) 00:10:27.715 Could not set queue depth (nvme0n3) 00:10:27.715 Could not set queue depth (nvme0n4) 00:10:27.973 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.973 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.973 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:10:27.973 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.973 fio-3.35 00:10:27.973 Starting 4 threads 00:10:29.349 00:10:29.349 job0: (groupid=0, jobs=1): err= 0: pid=1519448: Tue Dec 10 12:18:51 2024 00:10:29.349 read: IOPS=49, BW=198KiB/s (202kB/s)(204KiB/1032msec) 00:10:29.349 slat (nsec): min=6995, max=23020, avg=12033.67, stdev=6312.56 00:10:29.349 clat (usec): min=210, max=41979, avg=17852.87, stdev=20403.88 00:10:29.349 lat (usec): min=217, max=41988, avg=17864.91, stdev=20405.08 00:10:29.349 clat percentiles (usec): 00:10:29.349 | 1.00th=[ 210], 5.00th=[ 215], 10.00th=[ 231], 20.00th=[ 239], 00:10:29.349 | 30.00th=[ 249], 40.00th=[ 269], 50.00th=[ 297], 60.00th=[40633], 00:10:29.349 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:29.349 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:29.349 | 99.99th=[42206] 00:10:29.349 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:10:29.349 slat (nsec): min=9085, max=40349, avg=10922.85, stdev=3489.58 00:10:29.349 clat (usec): min=146, max=401, avg=223.10, stdev=29.01 00:10:29.349 lat (usec): min=156, max=436, avg=234.03, stdev=29.42 00:10:29.349 clat percentiles (usec): 00:10:29.349 | 1.00th=[ 157], 5.00th=[ 180], 10.00th=[ 190], 20.00th=[ 198], 00:10:29.349 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 229], 00:10:29.349 | 70.00th=[ 239], 80.00th=[ 251], 90.00th=[ 260], 95.00th=[ 265], 00:10:29.349 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 404], 99.95th=[ 404], 00:10:29.349 | 99.99th=[ 404] 00:10:29.349 bw ( KiB/s): min= 4096, max= 4096, per=18.53%, avg=4096.00, stdev= 0.00, samples=1 00:10:29.349 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:29.349 lat (usec) : 250=74.42%, 500=21.67% 00:10:29.349 lat (msec) : 50=3.91% 00:10:29.349 cpu : usr=0.19%, sys=0.58%, ctx=563, majf=0, minf=1 00:10:29.349 
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.349 issued rwts: total=51,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.349 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.349 job1: (groupid=0, jobs=1): err= 0: pid=1519449: Tue Dec 10 12:18:51 2024 00:10:29.349 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:29.349 slat (nsec): min=6548, max=36848, avg=8029.78, stdev=1913.04 00:10:29.349 clat (usec): min=166, max=42161, avg=451.63, stdev=2957.26 00:10:29.349 lat (usec): min=173, max=42169, avg=459.66, stdev=2957.40 00:10:29.349 clat percentiles (usec): 00:10:29.349 | 1.00th=[ 188], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 221], 00:10:29.349 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:10:29.349 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 265], 00:10:29.349 | 99.00th=[ 338], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:10:29.349 | 99.99th=[42206] 00:10:29.349 write: IOPS=1643, BW=6573KiB/s (6731kB/s)(6580KiB/1001msec); 0 zone resets 00:10:29.349 slat (nsec): min=9747, max=42975, avg=10867.57, stdev=1889.02 00:10:29.349 clat (usec): min=107, max=2337, avg=162.04, stdev=62.98 00:10:29.349 lat (usec): min=118, max=2353, avg=172.91, stdev=63.29 00:10:29.349 clat percentiles (usec): 00:10:29.349 | 1.00th=[ 120], 5.00th=[ 127], 10.00th=[ 130], 20.00th=[ 135], 00:10:29.349 | 30.00th=[ 139], 40.00th=[ 145], 50.00th=[ 151], 60.00th=[ 159], 00:10:29.349 | 70.00th=[ 176], 80.00th=[ 186], 90.00th=[ 198], 95.00th=[ 239], 00:10:29.349 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 338], 99.95th=[ 2343], 00:10:29.349 | 99.99th=[ 2343] 00:10:29.349 bw ( KiB/s): min= 4600, max= 4600, per=20.81%, avg=4600.00, stdev= 0.00, samples=1 00:10:29.349 iops : min= 1150, max= 1150, avg=1150.00, stdev= 0.00, 
samples=1 00:10:29.349 lat (usec) : 250=85.60%, 500=14.12% 00:10:29.349 lat (msec) : 4=0.03%, 50=0.25% 00:10:29.349 cpu : usr=1.60%, sys=4.40%, ctx=3181, majf=0, minf=1 00:10:29.349 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.349 issued rwts: total=1536,1645,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.349 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.349 job2: (groupid=0, jobs=1): err= 0: pid=1519451: Tue Dec 10 12:18:51 2024 00:10:29.349 read: IOPS=1033, BW=4136KiB/s (4235kB/s)(4140KiB/1001msec) 00:10:29.349 slat (nsec): min=6723, max=26330, avg=7801.01, stdev=1878.85 00:10:29.349 clat (usec): min=202, max=42241, avg=688.07, stdev=4208.79 00:10:29.349 lat (usec): min=209, max=42249, avg=695.87, stdev=4208.98 00:10:29.349 clat percentiles (usec): 00:10:29.349 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 239], 00:10:29.349 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 251], 00:10:29.349 | 70.00th=[ 255], 80.00th=[ 260], 90.00th=[ 269], 95.00th=[ 285], 00:10:29.349 | 99.00th=[40633], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:29.349 | 99.99th=[42206] 00:10:29.349 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:29.349 slat (nsec): min=9055, max=40653, avg=10150.52, stdev=1575.88 00:10:29.349 clat (usec): min=120, max=3670, avg=168.83, stdev=94.35 00:10:29.349 lat (usec): min=130, max=3697, avg=178.98, stdev=94.92 00:10:29.350 clat percentiles (usec): 00:10:29.350 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 145], 00:10:29.350 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 165], 00:10:29.350 | 70.00th=[ 178], 80.00th=[ 188], 90.00th=[ 198], 95.00th=[ 237], 00:10:29.350 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 519], 99.95th=[ 3687], 
00:10:29.350 | 99.99th=[ 3687] 00:10:29.350 bw ( KiB/s): min= 8192, max= 8192, per=37.06%, avg=8192.00, stdev= 0.00, samples=1 00:10:29.350 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:29.350 lat (usec) : 250=80.82%, 500=18.63%, 750=0.08% 00:10:29.350 lat (msec) : 4=0.04%, 50=0.43% 00:10:29.350 cpu : usr=1.50%, sys=2.10%, ctx=2571, majf=0, minf=1 00:10:29.350 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.350 issued rwts: total=1035,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.350 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.350 job3: (groupid=0, jobs=1): err= 0: pid=1519453: Tue Dec 10 12:18:51 2024 00:10:29.350 read: IOPS=1484, BW=5936KiB/s (6079kB/s)(6168KiB/1039msec) 00:10:29.350 slat (nsec): min=6995, max=23274, avg=8386.63, stdev=1446.21 00:10:29.350 clat (usec): min=195, max=41064, avg=402.01, stdev=2537.16 00:10:29.350 lat (usec): min=203, max=41086, avg=410.39, stdev=2537.88 00:10:29.350 clat percentiles (usec): 00:10:29.350 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 229], 00:10:29.350 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:10:29.350 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 273], 00:10:29.350 | 99.00th=[ 367], 99.50th=[ 383], 99.90th=[41157], 99.95th=[41157], 00:10:29.350 | 99.99th=[41157] 00:10:29.350 write: IOPS=1971, BW=7885KiB/s (8074kB/s)(8192KiB/1039msec); 0 zone resets 00:10:29.350 slat (nsec): min=9975, max=38444, avg=11258.17, stdev=1808.33 00:10:29.350 clat (usec): min=125, max=610, avg=181.83, stdev=28.48 00:10:29.350 lat (usec): min=136, max=622, avg=193.09, stdev=28.80 00:10:29.350 clat percentiles (usec): 00:10:29.350 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 159], 00:10:29.350 | 30.00th=[ 165], 40.00th=[ 
172], 50.00th=[ 178], 60.00th=[ 184], 00:10:29.350 | 70.00th=[ 190], 80.00th=[ 202], 90.00th=[ 221], 95.00th=[ 233], 00:10:29.350 | 99.00th=[ 249], 99.50th=[ 262], 99.90th=[ 379], 99.95th=[ 449], 00:10:29.350 | 99.99th=[ 611] 00:10:29.350 bw ( KiB/s): min= 6992, max= 9392, per=37.06%, avg=8192.00, stdev=1697.06, samples=2 00:10:29.350 iops : min= 1748, max= 2348, avg=2048.00, stdev=424.26, samples=2 00:10:29.350 lat (usec) : 250=85.18%, 500=14.62%, 750=0.03% 00:10:29.350 lat (msec) : 50=0.17% 00:10:29.350 cpu : usr=2.60%, sys=5.78%, ctx=3590, majf=0, minf=1 00:10:29.350 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.350 issued rwts: total=1542,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.350 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.350 00:10:29.350 Run status group 0 (all jobs): 00:10:29.350 READ: bw=15.7MiB/s (16.4MB/s), 198KiB/s-6138KiB/s (202kB/s-6285kB/s), io=16.3MiB (17.1MB), run=1001-1039msec 00:10:29.350 WRITE: bw=21.6MiB/s (22.6MB/s), 1984KiB/s-7885KiB/s (2032kB/s-8074kB/s), io=22.4MiB (23.5MB), run=1001-1039msec 00:10:29.350 00:10:29.350 Disk stats (read/write): 00:10:29.350 nvme0n1: ios=82/512, merge=0/0, ticks=733/112, in_queue=845, util=87.78% 00:10:29.350 nvme0n2: ios=1048/1494, merge=0/0, ticks=630/228, in_queue=858, util=91.59% 00:10:29.350 nvme0n3: ios=1035/1024, merge=0/0, ticks=667/164, in_queue=831, util=90.97% 00:10:29.350 nvme0n4: ios=1583/2048, merge=0/0, ticks=461/342, in_queue=803, util=91.00% 00:10:29.350 12:18:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:29.350 [global] 00:10:29.350 thread=1 00:10:29.350 invalidate=1 00:10:29.350 rw=write 00:10:29.350 time_based=1 
00:10:29.350 runtime=1 00:10:29.350 ioengine=libaio 00:10:29.350 direct=1 00:10:29.350 bs=4096 00:10:29.350 iodepth=128 00:10:29.350 norandommap=0 00:10:29.350 numjobs=1 00:10:29.350 00:10:29.350 verify_dump=1 00:10:29.350 verify_backlog=512 00:10:29.350 verify_state_save=0 00:10:29.350 do_verify=1 00:10:29.350 verify=crc32c-intel 00:10:29.350 [job0] 00:10:29.350 filename=/dev/nvme0n1 00:10:29.350 [job1] 00:10:29.350 filename=/dev/nvme0n2 00:10:29.350 [job2] 00:10:29.350 filename=/dev/nvme0n3 00:10:29.350 [job3] 00:10:29.350 filename=/dev/nvme0n4 00:10:29.350 Could not set queue depth (nvme0n1) 00:10:29.350 Could not set queue depth (nvme0n2) 00:10:29.350 Could not set queue depth (nvme0n3) 00:10:29.350 Could not set queue depth (nvme0n4) 00:10:29.350 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.350 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.350 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.350 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.350 fio-3.35 00:10:29.350 Starting 4 threads 00:10:30.727 00:10:30.727 job0: (groupid=0, jobs=1): err= 0: pid=1519843: Tue Dec 10 12:18:52 2024 00:10:30.727 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:10:30.727 slat (nsec): min=1096, max=15462k, avg=102100.80, stdev=769834.93 00:10:30.727 clat (usec): min=4367, max=52756, avg=12689.16, stdev=6018.98 00:10:30.727 lat (usec): min=4373, max=52765, avg=12791.26, stdev=6100.56 00:10:30.727 clat percentiles (usec): 00:10:30.727 | 1.00th=[ 6194], 5.00th=[ 7832], 10.00th=[ 8291], 20.00th=[ 8848], 00:10:30.727 | 30.00th=[ 9896], 40.00th=[10552], 50.00th=[11076], 60.00th=[11469], 00:10:30.727 | 70.00th=[12649], 80.00th=[14091], 90.00th=[20579], 95.00th=[23200], 00:10:30.727 | 99.00th=[38011], 
99.50th=[45351], 99.90th=[52167], 99.95th=[52691], 00:10:30.727 | 99.99th=[52691] 00:10:30.727 write: IOPS=4534, BW=17.7MiB/s (18.6MB/s)(17.9MiB/1008msec); 0 zone resets 00:10:30.727 slat (usec): min=2, max=22075, avg=118.42, stdev=744.30 00:10:30.727 clat (usec): min=3956, max=75005, avg=16541.86, stdev=13144.50 00:10:30.727 lat (usec): min=5528, max=75009, avg=16660.28, stdev=13233.83 00:10:30.727 clat percentiles (usec): 00:10:30.727 | 1.00th=[ 6915], 5.00th=[ 7832], 10.00th=[ 9110], 20.00th=[10159], 00:10:30.727 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[11469], 00:10:30.727 | 70.00th=[13566], 80.00th=[20579], 90.00th=[38536], 95.00th=[47449], 00:10:30.727 | 99.00th=[68682], 99.50th=[69731], 99.90th=[74974], 99.95th=[74974], 00:10:30.727 | 99.99th=[74974] 00:10:30.727 bw ( KiB/s): min=11640, max=23904, per=25.42%, avg=17772.00, stdev=8671.96, samples=2 00:10:30.727 iops : min= 2910, max= 5976, avg=4443.00, stdev=2167.99, samples=2 00:10:30.727 lat (msec) : 4=0.01%, 10=23.54%, 20=60.44%, 50=13.58%, 100=2.43% 00:10:30.727 cpu : usr=3.57%, sys=4.87%, ctx=394, majf=0, minf=1 00:10:30.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:30.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:30.727 issued rwts: total=4096,4571,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.727 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:30.727 job1: (groupid=0, jobs=1): err= 0: pid=1519864: Tue Dec 10 12:18:52 2024 00:10:30.727 read: IOPS=4844, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1003msec) 00:10:30.727 slat (nsec): min=1308, max=18836k, avg=103881.36, stdev=823696.20 00:10:30.727 clat (usec): min=843, max=40414, avg=13457.19, stdev=5739.19 00:10:30.727 lat (usec): min=4845, max=40441, avg=13561.07, stdev=5796.08 00:10:30.727 clat percentiles (usec): 00:10:30.727 | 1.00th=[ 5145], 5.00th=[ 7439], 10.00th=[ 
8979], 20.00th=[ 9896], 00:10:30.727 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11600], 60.00th=[12256], 00:10:30.727 | 70.00th=[13960], 80.00th=[16319], 90.00th=[21627], 95.00th=[23725], 00:10:30.727 | 99.00th=[33817], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:10:30.727 | 99.99th=[40633] 00:10:30.727 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:30.727 slat (nsec): min=1848, max=19642k, avg=86006.35, stdev=657700.40 00:10:30.727 clat (usec): min=538, max=40345, avg=12077.72, stdev=4230.22 00:10:30.727 lat (usec): min=545, max=40366, avg=12163.72, stdev=4301.40 00:10:30.727 clat percentiles (usec): 00:10:30.727 | 1.00th=[ 4490], 5.00th=[ 6783], 10.00th=[ 8455], 20.00th=[ 9241], 00:10:30.727 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[11207], 00:10:30.727 | 70.00th=[12125], 80.00th=[14615], 90.00th=[20055], 95.00th=[20841], 00:10:30.727 | 99.00th=[22152], 99.50th=[22414], 99.90th=[33424], 99.95th=[39060], 00:10:30.727 | 99.99th=[40109] 00:10:30.727 bw ( KiB/s): min=16384, max=24576, per=29.29%, avg=20480.00, stdev=5792.62, samples=2 00:10:30.727 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:10:30.727 lat (usec) : 750=0.03%, 1000=0.01% 00:10:30.727 lat (msec) : 4=0.06%, 10=25.17%, 20=62.54%, 50=12.19% 00:10:30.727 cpu : usr=3.19%, sys=5.49%, ctx=407, majf=0, minf=1 00:10:30.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:30.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:30.727 issued rwts: total=4859,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.727 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:30.727 job2: (groupid=0, jobs=1): err= 0: pid=1519897: Tue Dec 10 12:18:52 2024 00:10:30.727 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:10:30.727 slat (nsec): min=1172, max=29579k, 
avg=135458.11, stdev=1101245.11 00:10:30.727 clat (usec): min=4181, max=71852, avg=17943.59, stdev=11562.98 00:10:30.727 lat (usec): min=4190, max=71872, avg=18079.05, stdev=11647.40 00:10:30.727 clat percentiles (usec): 00:10:30.727 | 1.00th=[ 6915], 5.00th=[10159], 10.00th=[10814], 20.00th=[11338], 00:10:30.727 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12780], 60.00th=[13829], 00:10:30.727 | 70.00th=[16450], 80.00th=[18482], 90.00th=[40109], 95.00th=[49546], 00:10:30.727 | 99.00th=[53216], 99.50th=[53216], 99.90th=[57934], 99.95th=[71828], 00:10:30.727 | 99.99th=[71828] 00:10:30.727 write: IOPS=3816, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1004msec); 0 zone resets 00:10:30.727 slat (nsec): min=1891, max=30677k, avg=126559.42, stdev=881562.55 00:10:30.727 clat (usec): min=304, max=56977, avg=16398.59, stdev=10815.10 00:10:30.727 lat (usec): min=1063, max=63047, avg=16525.15, stdev=10904.23 00:10:30.727 clat percentiles (usec): 00:10:30.727 | 1.00th=[ 3982], 5.00th=[ 7046], 10.00th=[ 8848], 20.00th=[10421], 00:10:30.727 | 30.00th=[11207], 40.00th=[11863], 50.00th=[11994], 60.00th=[12518], 00:10:30.727 | 70.00th=[13304], 80.00th=[21627], 90.00th=[34866], 95.00th=[40633], 00:10:30.727 | 99.00th=[56886], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:10:30.727 | 99.99th=[56886] 00:10:30.727 bw ( KiB/s): min=13136, max=16496, per=21.19%, avg=14816.00, stdev=2375.88, samples=2 00:10:30.727 iops : min= 3284, max= 4124, avg=3704.00, stdev=593.97, samples=2 00:10:30.727 lat (usec) : 500=0.01% 00:10:30.727 lat (msec) : 2=0.03%, 4=0.54%, 10=10.21%, 20=68.54%, 50=17.97% 00:10:30.727 lat (msec) : 100=2.70% 00:10:30.727 cpu : usr=1.99%, sys=4.49%, ctx=381, majf=0, minf=2 00:10:30.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:30.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:30.727 issued rwts: total=3584,3832,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:10:30.727 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:30.727 job3: (groupid=0, jobs=1): err= 0: pid=1519909: Tue Dec 10 12:18:52 2024 00:10:30.727 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:10:30.727 slat (nsec): min=1116, max=20433k, avg=118847.78, stdev=920427.16 00:10:30.727 clat (usec): min=456, max=54737, avg=16349.86, stdev=9250.18 00:10:30.727 lat (usec): min=464, max=54764, avg=16468.71, stdev=9313.95 00:10:30.727 clat percentiles (usec): 00:10:30.727 | 1.00th=[ 1156], 5.00th=[ 1876], 10.00th=[ 6652], 20.00th=[10814], 00:10:30.727 | 30.00th=[11207], 40.00th=[11731], 50.00th=[13435], 60.00th=[16712], 00:10:30.727 | 70.00th=[18482], 80.00th=[22152], 90.00th=[29492], 95.00th=[36963], 00:10:30.728 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46924], 99.95th=[52167], 00:10:30.728 | 99.99th=[54789] 00:10:30.728 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:10:30.728 slat (nsec): min=1954, max=25082k, avg=130676.94, stdev=958068.46 00:10:30.728 clat (usec): min=567, max=53640, avg=16916.04, stdev=9136.34 00:10:30.728 lat (usec): min=3051, max=53650, avg=17046.72, stdev=9213.48 00:10:30.728 clat percentiles (usec): 00:10:30.728 | 1.00th=[ 4555], 5.00th=[ 7177], 10.00th=[ 9634], 20.00th=[10945], 00:10:30.728 | 30.00th=[11338], 40.00th=[12125], 50.00th=[13435], 60.00th=[16581], 00:10:30.728 | 70.00th=[20317], 80.00th=[21365], 90.00th=[29230], 95.00th=[39584], 00:10:30.728 | 99.00th=[49546], 99.50th=[51643], 99.90th=[53740], 99.95th=[53740], 00:10:30.728 | 99.99th=[53740] 00:10:30.728 bw ( KiB/s): min=15360, max=16384, per=22.70%, avg=15872.00, stdev=724.08, samples=2 00:10:30.728 iops : min= 3840, max= 4096, avg=3968.00, stdev=181.02, samples=2 00:10:30.728 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.26% 00:10:30.728 lat (msec) : 2=3.07%, 4=0.08%, 10=10.01%, 20=56.76%, 50=29.24% 00:10:30.728 lat (msec) : 100=0.53% 00:10:30.728 cpu : usr=3.38%, sys=3.97%, ctx=413, 
majf=0, minf=1 00:10:30.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:30.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:30.728 issued rwts: total=3584,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:30.728 00:10:30.728 Run status group 0 (all jobs): 00:10:30.728 READ: bw=62.5MiB/s (65.5MB/s), 13.9MiB/s-18.9MiB/s (14.6MB/s-19.8MB/s), io=63.0MiB (66.0MB), run=1003-1008msec 00:10:30.728 WRITE: bw=68.3MiB/s (71.6MB/s), 14.9MiB/s-19.9MiB/s (15.6MB/s-20.9MB/s), io=68.8MiB (72.2MB), run=1003-1008msec 00:10:30.728 00:10:30.728 Disk stats (read/write): 00:10:30.728 nvme0n1: ios=3633/3902, merge=0/0, ticks=27941/43320, in_queue=71261, util=82.46% 00:10:30.728 nvme0n2: ios=3634/3903, merge=0/0, ticks=39572/37688, in_queue=77260, util=86.33% 00:10:30.728 nvme0n3: ios=2617/3023, merge=0/0, ticks=34311/39385, in_queue=73696, util=91.23% 00:10:30.728 nvme0n4: ios=3129/3319, merge=0/0, ticks=32555/36295, in_queue=68850, util=96.80% 00:10:30.728 12:18:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:30.728 [global] 00:10:30.728 thread=1 00:10:30.728 invalidate=1 00:10:30.728 rw=randwrite 00:10:30.728 time_based=1 00:10:30.728 runtime=1 00:10:30.728 ioengine=libaio 00:10:30.728 direct=1 00:10:30.728 bs=4096 00:10:30.728 iodepth=128 00:10:30.728 norandommap=0 00:10:30.728 numjobs=1 00:10:30.728 00:10:30.728 verify_dump=1 00:10:30.728 verify_backlog=512 00:10:30.728 verify_state_save=0 00:10:30.728 do_verify=1 00:10:30.728 verify=crc32c-intel 00:10:30.728 [job0] 00:10:30.728 filename=/dev/nvme0n1 00:10:30.728 [job1] 00:10:30.728 filename=/dev/nvme0n2 00:10:30.728 [job2] 00:10:30.728 filename=/dev/nvme0n3 00:10:30.728 
[job3] 00:10:30.728 filename=/dev/nvme0n4 00:10:30.728 Could not set queue depth (nvme0n1) 00:10:30.728 Could not set queue depth (nvme0n2) 00:10:30.728 Could not set queue depth (nvme0n3) 00:10:30.728 Could not set queue depth (nvme0n4) 00:10:30.986 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:30.986 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:30.986 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:30.986 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:30.986 fio-3.35 00:10:30.986 Starting 4 threads 00:10:32.362 00:10:32.362 job0: (groupid=0, jobs=1): err= 0: pid=1520328: Tue Dec 10 12:18:54 2024 00:10:32.362 read: IOPS=1871, BW=7485KiB/s (7664kB/s)(7552KiB/1009msec) 00:10:32.362 slat (usec): min=3, max=50564, avg=344.28, stdev=2615.76 00:10:32.362 clat (usec): min=1429, max=135467, avg=39548.97, stdev=27133.88 00:10:32.362 lat (msec): min=17, max=135, avg=39.89, stdev=27.22 00:10:32.362 clat percentiles (msec): 00:10:32.362 | 1.00th=[ 18], 5.00th=[ 19], 10.00th=[ 19], 20.00th=[ 21], 00:10:32.362 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 26], 60.00th=[ 35], 00:10:32.362 | 70.00th=[ 46], 80.00th=[ 59], 90.00th=[ 85], 95.00th=[ 102], 00:10:32.362 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 136], 99.95th=[ 136], 00:10:32.362 | 99.99th=[ 136] 00:10:32.362 write: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec); 0 zone resets 00:10:32.362 slat (usec): min=5, max=28427, avg=168.17, stdev=1222.42 00:10:32.362 clat (usec): min=9722, max=99422, avg=23607.54, stdev=16557.35 00:10:32.362 lat (usec): min=9731, max=99443, avg=23775.71, stdev=16592.73 00:10:32.362 clat percentiles (usec): 00:10:32.362 | 1.00th=[10552], 5.00th=[13042], 10.00th=[13173], 20.00th=[14746], 00:10:32.362 | 30.00th=[15926], 
40.00th=[17433], 50.00th=[18220], 60.00th=[18482], 00:10:32.362 | 70.00th=[20317], 80.00th=[23987], 90.00th=[45351], 95.00th=[62653], 00:10:32.362 | 99.00th=[93848], 99.50th=[93848], 99.90th=[93848], 99.95th=[93848], 00:10:32.362 | 99.99th=[99091] 00:10:32.362 bw ( KiB/s): min= 8192, max= 8192, per=11.93%, avg=8192.00, stdev= 0.00, samples=2 00:10:32.362 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:10:32.362 lat (msec) : 2=0.03%, 10=0.15%, 20=43.32%, 50=38.80%, 100=14.53% 00:10:32.362 lat (msec) : 250=3.18% 00:10:32.362 cpu : usr=1.29%, sys=2.58%, ctx=125, majf=0, minf=1 00:10:32.362 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:10:32.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.362 issued rwts: total=1888,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.362 job1: (groupid=0, jobs=1): err= 0: pid=1520343: Tue Dec 10 12:18:54 2024 00:10:32.362 read: IOPS=6098, BW=23.8MiB/s (25.0MB/s)(23.9MiB/1005msec) 00:10:32.362 slat (nsec): min=1132, max=20526k, avg=78261.04, stdev=601461.75 00:10:32.362 clat (usec): min=1888, max=35304, avg=10831.66, stdev=4151.00 00:10:32.362 lat (usec): min=2475, max=35308, avg=10909.92, stdev=4174.53 00:10:32.362 clat percentiles (usec): 00:10:32.362 | 1.00th=[ 3032], 5.00th=[ 4555], 10.00th=[ 6980], 20.00th=[ 9503], 00:10:32.362 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:10:32.362 | 70.00th=[10683], 80.00th=[11338], 90.00th=[14484], 95.00th=[20317], 00:10:32.362 | 99.00th=[31851], 99.50th=[33817], 99.90th=[35390], 99.95th=[35390], 00:10:32.362 | 99.99th=[35390] 00:10:32.362 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:10:32.362 slat (usec): min=2, max=18190, avg=72.95, stdev=495.78 00:10:32.362 clat (usec): min=442, max=49123, 
avg=9878.53, stdev=4323.33 00:10:32.362 lat (usec): min=452, max=49126, avg=9951.48, stdev=4348.28 00:10:32.362 clat percentiles (usec): 00:10:32.362 | 1.00th=[ 2212], 5.00th=[ 4883], 10.00th=[ 7242], 20.00th=[ 7963], 00:10:32.362 | 30.00th=[ 8848], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:10:32.362 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10945], 95.00th=[12518], 00:10:32.362 | 99.00th=[35914], 99.50th=[41681], 99.90th=[48497], 99.95th=[48497], 00:10:32.362 | 99.99th=[49021] 00:10:32.362 bw ( KiB/s): min=24576, max=24576, per=35.80%, avg=24576.00, stdev= 0.00, samples=2 00:10:32.362 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:10:32.362 lat (usec) : 500=0.03%, 750=0.03%, 1000=0.11% 00:10:32.362 lat (msec) : 2=0.15%, 4=3.22%, 10=36.16%, 20=56.77%, 50=3.54% 00:10:32.362 cpu : usr=3.88%, sys=6.97%, ctx=509, majf=0, minf=1 00:10:32.362 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:32.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.362 issued rwts: total=6129,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.362 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.362 job2: (groupid=0, jobs=1): err= 0: pid=1520363: Tue Dec 10 12:18:54 2024 00:10:32.362 read: IOPS=4679, BW=18.3MiB/s (19.2MB/s)(18.5MiB/1010msec) 00:10:32.362 slat (nsec): min=1404, max=12359k, avg=112585.15, stdev=784049.78 00:10:32.362 clat (usec): min=4062, max=62453, avg=13215.61, stdev=5201.10 00:10:32.362 lat (usec): min=4069, max=62462, avg=13328.20, stdev=5271.24 00:10:32.362 clat percentiles (usec): 00:10:32.362 | 1.00th=[ 4883], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[10945], 00:10:32.362 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[12125], 00:10:32.362 | 70.00th=[13435], 80.00th=[15008], 90.00th=[18482], 95.00th=[20579], 00:10:32.362 | 99.00th=[31065], 99.50th=[54789], 
99.90th=[62653], 99.95th=[62653], 00:10:32.362 | 99.99th=[62653] 00:10:32.362 write: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec); 0 zone resets 00:10:32.363 slat (usec): min=2, max=8978, avg=85.91, stdev=449.36 00:10:32.363 clat (usec): min=2688, max=62416, avg=12806.55, stdev=7382.84 00:10:32.363 lat (usec): min=2699, max=62419, avg=12892.46, stdev=7411.63 00:10:32.363 clat percentiles (usec): 00:10:32.363 | 1.00th=[ 3458], 5.00th=[ 5735], 10.00th=[ 8029], 20.00th=[10290], 00:10:32.363 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:10:32.363 | 70.00th=[11863], 80.00th=[11994], 90.00th=[18220], 95.00th=[30016], 00:10:32.363 | 99.00th=[52691], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:10:32.363 | 99.99th=[62653] 00:10:32.363 bw ( KiB/s): min=20408, max=20472, per=29.78%, avg=20440.00, stdev=45.25, samples=2 00:10:32.363 iops : min= 5102, max= 5118, avg=5110.00, stdev=11.31, samples=2 00:10:32.363 lat (msec) : 4=0.93%, 10=11.94%, 20=79.47%, 50=6.68%, 100=0.96% 00:10:32.363 cpu : usr=3.47%, sys=6.05%, ctx=600, majf=0, minf=1 00:10:32.363 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:32.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.363 issued rwts: total=4726,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.363 job3: (groupid=0, jobs=1): err= 0: pid=1520370: Tue Dec 10 12:18:54 2024 00:10:32.363 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:10:32.363 slat (nsec): min=1406, max=14740k, avg=126158.00, stdev=889745.33 00:10:32.363 clat (usec): min=4943, max=39040, avg=14937.19, stdev=5595.58 00:10:32.363 lat (usec): min=5794, max=39050, avg=15063.35, stdev=5665.14 00:10:32.363 clat percentiles (usec): 00:10:32.363 | 1.00th=[ 8094], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[11338], 
00:10:32.363 | 30.00th=[11994], 40.00th=[12911], 50.00th=[13435], 60.00th=[14615], 00:10:32.363 | 70.00th=[15270], 80.00th=[16319], 90.00th=[22152], 95.00th=[28705], 00:10:32.363 | 99.00th=[35914], 99.50th=[36963], 99.90th=[39060], 99.95th=[39060], 00:10:32.363 | 99.99th=[39060] 00:10:32.363 write: IOPS=3993, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1007msec); 0 zone resets 00:10:32.363 slat (usec): min=2, max=12914, avg=131.29, stdev=703.98 00:10:32.363 clat (usec): min=1934, max=40725, avg=18442.31, stdev=8123.91 00:10:32.363 lat (usec): min=3579, max=40730, avg=18573.60, stdev=8187.43 00:10:32.363 clat percentiles (usec): 00:10:32.363 | 1.00th=[ 4817], 5.00th=[ 8848], 10.00th=[10290], 20.00th=[11600], 00:10:32.363 | 30.00th=[12387], 40.00th=[13698], 50.00th=[15926], 60.00th=[19268], 00:10:32.363 | 70.00th=[22676], 80.00th=[25560], 90.00th=[31065], 95.00th=[34341], 00:10:32.363 | 99.00th=[37487], 99.50th=[39584], 99.90th=[40633], 99.95th=[40633], 00:10:32.363 | 99.99th=[40633] 00:10:32.363 bw ( KiB/s): min=14768, max=16376, per=22.68%, avg=15572.00, stdev=1137.03, samples=2 00:10:32.363 iops : min= 3692, max= 4094, avg=3893.00, stdev=284.26, samples=2 00:10:32.363 lat (msec) : 2=0.01%, 4=0.08%, 10=7.19%, 20=67.13%, 50=25.59% 00:10:32.363 cpu : usr=2.98%, sys=5.37%, ctx=396, majf=0, minf=1 00:10:32.363 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:32.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.363 issued rwts: total=3584,4021,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.363 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.363 00:10:32.363 Run status group 0 (all jobs): 00:10:32.363 READ: bw=63.1MiB/s (66.2MB/s), 7485KiB/s-23.8MiB/s (7664kB/s-25.0MB/s), io=63.8MiB (66.9MB), run=1005-1010msec 00:10:32.363 WRITE: bw=67.0MiB/s (70.3MB/s), 8119KiB/s-23.9MiB/s (8314kB/s-25.0MB/s), io=67.7MiB (71.0MB), 
run=1005-1010msec 00:10:32.363 00:10:32.363 Disk stats (read/write): 00:10:32.363 nvme0n1: ios=1684/2048, merge=0/0, ticks=15979/11534, in_queue=27513, util=98.20% 00:10:32.363 nvme0n2: ios=5137/5295, merge=0/0, ticks=34049/32576, in_queue=66625, util=96.45% 00:10:32.363 nvme0n3: ios=4011/4096, merge=0/0, ticks=52239/53455, in_queue=105694, util=98.23% 00:10:32.363 nvme0n4: ios=3023/3072, merge=0/0, ticks=44298/61052, in_queue=105350, util=99.37% 00:10:32.363 12:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:32.363 12:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1520477 00:10:32.363 12:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:32.363 12:18:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:32.363 [global] 00:10:32.363 thread=1 00:10:32.363 invalidate=1 00:10:32.363 rw=read 00:10:32.363 time_based=1 00:10:32.363 runtime=10 00:10:32.363 ioengine=libaio 00:10:32.363 direct=1 00:10:32.363 bs=4096 00:10:32.363 iodepth=1 00:10:32.363 norandommap=1 00:10:32.363 numjobs=1 00:10:32.363 00:10:32.363 [job0] 00:10:32.363 filename=/dev/nvme0n1 00:10:32.363 [job1] 00:10:32.363 filename=/dev/nvme0n2 00:10:32.363 [job2] 00:10:32.363 filename=/dev/nvme0n3 00:10:32.363 [job3] 00:10:32.363 filename=/dev/nvme0n4 00:10:32.363 Could not set queue depth (nvme0n1) 00:10:32.363 Could not set queue depth (nvme0n2) 00:10:32.363 Could not set queue depth (nvme0n3) 00:10:32.363 Could not set queue depth (nvme0n4) 00:10:32.622 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:32.622 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:32.622 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:10:32.622 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:32.622 fio-3.35 00:10:32.622 Starting 4 threads 00:10:35.903 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:35.903 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:35.903 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=446464, buflen=4096 00:10:35.903 fio: pid=1520791, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:35.903 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=3645440, buflen=4096 00:10:35.903 fio: pid=1520790, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:35.903 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.903 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:35.903 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=319488, buflen=4096 00:10:35.903 fio: pid=1520788, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:35.903 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.903 12:18:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:36.162 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:10:36.162 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:36.162 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=331776, buflen=4096 00:10:36.162 fio: pid=1520789, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:36.162 00:10:36.162 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1520788: Tue Dec 10 12:18:58 2024 00:10:36.162 read: IOPS=25, BW=102KiB/s (104kB/s)(312KiB/3071msec) 00:10:36.162 slat (nsec): min=10121, max=62983, avg=21898.95, stdev=5476.45 00:10:36.162 clat (usec): min=393, max=42080, avg=39076.55, stdev=9057.93 00:10:36.162 lat (usec): min=416, max=42101, avg=39098.47, stdev=9056.90 00:10:36.162 clat percentiles (usec): 00:10:36.162 | 1.00th=[ 396], 5.00th=[ 429], 10.00th=[40633], 20.00th=[41157], 00:10:36.162 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:36.162 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:36.162 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:36.162 | 99.99th=[42206] 00:10:36.162 bw ( KiB/s): min= 96, max= 112, per=7.26%, avg=102.17, stdev= 6.01, samples=6 00:10:36.162 iops : min= 24, max= 28, avg=25.50, stdev= 1.52, samples=6 00:10:36.162 lat (usec) : 500=5.06% 00:10:36.162 lat (msec) : 50=93.67% 00:10:36.162 cpu : usr=0.10%, sys=0.00%, ctx=80, majf=0, minf=1 00:10:36.162 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.162 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.162 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.162 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.162 
job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1520789: Tue Dec 10 12:18:58 2024 00:10:36.162 read: IOPS=24, BW=98.2KiB/s (101kB/s)(324KiB/3299msec) 00:10:36.162 slat (usec): min=3, max=22698, avg=403.12, stdev=2718.13 00:10:36.162 clat (usec): min=219, max=42016, avg=40058.84, stdev=6361.04 00:10:36.162 lat (usec): min=225, max=63818, avg=40466.77, stdev=6987.21 00:10:36.162 clat percentiles (usec): 00:10:36.163 | 1.00th=[ 221], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:36.163 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:36.163 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:10:36.163 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:36.163 | 99.99th=[42206] 00:10:36.163 bw ( KiB/s): min= 96, max= 108, per=7.05%, avg=99.33, stdev= 5.32, samples=6 00:10:36.163 iops : min= 24, max= 27, avg=24.83, stdev= 1.33, samples=6 00:10:36.163 lat (usec) : 250=1.22%, 500=1.22% 00:10:36.163 lat (msec) : 50=96.34% 00:10:36.163 cpu : usr=0.03%, sys=0.00%, ctx=88, majf=0, minf=1 00:10:36.163 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.163 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.163 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.163 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.163 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1520790: Tue Dec 10 12:18:58 2024 00:10:36.163 read: IOPS=310, BW=1240KiB/s (1270kB/s)(3560KiB/2871msec) 00:10:36.163 slat (usec): min=7, max=14720, avg=25.90, stdev=492.86 00:10:36.163 clat (usec): min=170, max=41973, avg=3174.22, stdev=10531.02 00:10:36.163 lat (usec): min=177, max=55840, avg=3200.12, stdev=10605.60 00:10:36.163 clat 
percentiles (usec): 00:10:36.163 | 1.00th=[ 186], 5.00th=[ 206], 10.00th=[ 219], 20.00th=[ 233], 00:10:36.163 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:10:36.163 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[41157], 00:10:36.163 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:36.163 | 99.99th=[42206] 00:10:36.163 bw ( KiB/s): min= 96, max= 6664, per=100.00%, avg=1409.60, stdev=2937.30, samples=5 00:10:36.163 iops : min= 24, max= 1666, avg=352.40, stdev=734.32, samples=5 00:10:36.163 lat (usec) : 250=56.45%, 500=36.03%, 750=0.22% 00:10:36.163 lat (msec) : 50=7.18% 00:10:36.163 cpu : usr=0.21%, sys=0.52%, ctx=893, majf=0, minf=2 00:10:36.163 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.163 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.163 issued rwts: total=891,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.163 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.163 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1520791: Tue Dec 10 12:18:58 2024 00:10:36.163 read: IOPS=40, BW=162KiB/s (166kB/s)(436KiB/2687msec) 00:10:36.163 slat (nsec): min=9331, max=32825, avg=18818.12, stdev=6631.16 00:10:36.163 clat (usec): min=186, max=41068, avg=24433.66, stdev=20008.40 00:10:36.163 lat (usec): min=198, max=41095, avg=24452.43, stdev=20008.06 00:10:36.163 clat percentiles (usec): 00:10:36.163 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 221], 00:10:36.163 | 30.00th=[ 239], 40.00th=[ 375], 50.00th=[40633], 60.00th=[40633], 00:10:36.163 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:36.163 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:36.163 | 99.99th=[41157] 00:10:36.163 bw ( KiB/s): min= 112, max= 248, per=11.68%, 
avg=164.80, stdev=60.03, samples=5 00:10:36.163 iops : min= 28, max= 62, avg=41.20, stdev=15.01, samples=5 00:10:36.163 lat (usec) : 250=35.45%, 500=4.55% 00:10:36.163 lat (msec) : 50=59.09% 00:10:36.163 cpu : usr=0.19%, sys=0.00%, ctx=111, majf=0, minf=2 00:10:36.163 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.163 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.163 issued rwts: total=110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.163 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.163 00:10:36.163 Run status group 0 (all jobs): 00:10:36.163 READ: bw=1404KiB/s (1438kB/s), 98.2KiB/s-1240KiB/s (101kB/s-1270kB/s), io=4632KiB (4743kB), run=2687-3299msec 00:10:36.163 00:10:36.163 Disk stats (read/write): 00:10:36.163 nvme0n1: ios=78/0, merge=0/0, ticks=3050/0, in_queue=3050, util=94.24% 00:10:36.163 nvme0n2: ios=104/0, merge=0/0, ticks=3746/0, in_queue=3746, util=98.60% 00:10:36.163 nvme0n3: ios=894/0, merge=0/0, ticks=2985/0, in_queue=2985, util=95.92% 00:10:36.163 nvme0n4: ios=148/0, merge=0/0, ticks=3477/0, in_queue=3477, util=98.98% 00:10:36.421 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:36.421 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:36.421 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:36.421 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:36.679 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:36.679 12:18:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:36.937 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:36.937 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:37.196 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:37.196 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1520477 00:10:37.196 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:37.196 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:37.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.196 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:37.196 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:37.196 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:37.196 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.196 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:37.196 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.196 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
common/autotest_common.sh@1235 -- # return 0 00:10:37.196 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:37.196 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:37.196 nvmf hotplug test: fio failed as expected 00:10:37.196 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:37.454 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:37.454 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:37.454 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:37.454 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:37.454 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:37.454 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:37.454 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:37.454 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:37.454 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:37.454 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:37.454 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:37.454 rmmod nvme_tcp 00:10:37.454 rmmod nvme_fabrics 00:10:37.454 rmmod nvme_keyring 00:10:37.713 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:37.713 12:18:59 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:37.713 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:37.713 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1517725 ']' 00:10:37.714 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1517725 00:10:37.714 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1517725 ']' 00:10:37.714 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1517725 00:10:37.714 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:37.714 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.714 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1517725 00:10:37.714 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:37.714 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:37.714 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1517725' 00:10:37.714 killing process with pid 1517725 00:10:37.714 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1517725 00:10:37.714 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1517725 00:10:37.714 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:37.714 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:37.714 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 
00:10:37.714 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:10:37.714 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:37.714 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save
00:10:37.714 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore
00:10:37.714 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:37.714 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:37.714 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:37.714 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:37.714 12:18:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:40.254 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:40.254
00:10:40.254 real 0m27.067s
00:10:40.254 user 1m46.688s
00:10:40.254 sys 0m8.118s
00:10:40.254 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:40.254 12:19:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:10:40.254 ************************************
00:10:40.254 END TEST nvmf_fio_target
00:10:40.254 ************************************
00:10:40.254 12:19:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:10:40.254 12:19:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:40.254 12:19:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:40.254 12:19:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:40.254 ************************************
00:10:40.254 START TEST nvmf_bdevio
00:10:40.254 ************************************
00:10:40.254 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:10:40.254 * Looking for test storage...
00:10:40.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target
00:10:40.254 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:40.254 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version
00:10:40.254 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:40.254 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:40.254 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:40.254 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:40.254 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:40.254 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-:
00:10:40.254 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1
00:10:40.254 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-:
00:10:40.254 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2
00:10:40.254 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<'
00:10:40.254 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2
00:10:40.254 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1
00:10:40.254 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:40.254 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in
00:10:40.254 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1
00:10:40.254 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:40.254 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:40.254 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:40.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:40.255 --rc genhtml_branch_coverage=1
00:10:40.255 --rc genhtml_function_coverage=1
00:10:40.255 --rc genhtml_legend=1
00:10:40.255 --rc geninfo_all_blocks=1
00:10:40.255 --rc geninfo_unexecuted_blocks=1
00:10:40.255
00:10:40.255 '
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:40.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:40.255 --rc genhtml_branch_coverage=1
00:10:40.255 --rc genhtml_function_coverage=1
00:10:40.255 --rc genhtml_legend=1
00:10:40.255 --rc geninfo_all_blocks=1
00:10:40.255 --rc geninfo_unexecuted_blocks=1
00:10:40.255
00:10:40.255 '
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:10:40.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:40.255 --rc genhtml_branch_coverage=1
00:10:40.255 --rc genhtml_function_coverage=1
00:10:40.255 --rc genhtml_legend=1
00:10:40.255 --rc geninfo_all_blocks=1
00:10:40.255 --rc geninfo_unexecuted_blocks=1
00:10:40.255
00:10:40.255 '
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:10:40.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:40.255 --rc genhtml_branch_coverage=1
00:10:40.255 --rc genhtml_function_coverage=1
00:10:40.255 --rc genhtml_legend=1
00:10:40.255 --rc geninfo_all_blocks=1
00:10:40.255 --rc geninfo_unexecuted_blocks=1
00:10:40.255
00:10:40.255 '
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:40.255 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable
00:10:40.255 12:19:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:46.827 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:46.827 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=()
00:10:46.827 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:46.827 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:46.827 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:46.827 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:46.827 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:46.827 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=()
00:10:46.827 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:46.827 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=()
00:10:46.827 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810
00:10:46.827 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=()
00:10:46.827 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722
00:10:46.827 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=()
00:10:46.827 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx
00:10:46.827 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:46.827 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:46.827 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:46.827 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:46.827 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:46.827 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:46.827 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:46.827 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:10:46.828 Found 0000:86:00.0 (0x8086 - 0x159b)
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:10:46.828 Found 0000:86:00.1 (0x8086 - 0x159b)
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:10:46.828 Found net devices under 0000:86:00.0: cvl_0_0
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:10:46.828 Found net devices under 0000:86:00.1: cvl_0_1
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:46.828 12:19:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:46.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:46.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms
00:10:46.828
00:10:46.828 --- 10.0.0.2 ping statistics ---
00:10:46.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:46.828 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:46.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:46.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms
00:10:46.828
00:10:46.828 --- 10.0.0.1 ping statistics ---
00:10:46.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:46.828 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1525048
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1525048
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1525048 ']'
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:46.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:46.828 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:46.828 [2024-12-10 12:19:08.344906] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization...
00:10:46.828 [2024-12-10 12:19:08.344951] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:46.828 [2024-12-10 12:19:08.420874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:46.828 [2024-12-10 12:19:08.460992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:46.828 [2024-12-10 12:19:08.461029] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:46.829 [2024-12-10 12:19:08.461037] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:46.829 [2024-12-10 12:19:08.461043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:46.829 [2024-12-10 12:19:08.461049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:46.829 [2024-12-10 12:19:08.462578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:10:46.829 [2024-12-10 12:19:08.462689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:10:46.829 [2024-12-10 12:19:08.462795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:46.829 [2024-12-10 12:19:08.462796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:46.829 [2024-12-10 12:19:08.611564] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:46.829 Malloc0
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:10:46.829 [2024-12-10 12:19:08.674945] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=()
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:10:46.829 {
00:10:46.829 "params": {
00:10:46.829 "name": "Nvme$subsystem",
00:10:46.829 "trtype": "$TEST_TRANSPORT",
00:10:46.829 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:46.829 "adrfam": "ipv4",
00:10:46.829 "trsvcid": "$NVMF_PORT",
00:10:46.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:46.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:46.829 "hdgst": ${hdgst:-false},
00:10:46.829 "ddgst": ${ddgst:-false}
00:10:46.829 },
00:10:46.829 "method": "bdev_nvme_attach_controller"
00:10:46.829 }
00:10:46.829 EOF
00:10:46.829 )")
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq .
00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:46.829 12:19:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:46.829 "params": { 00:10:46.829 "name": "Nvme1", 00:10:46.829 "trtype": "tcp", 00:10:46.829 "traddr": "10.0.0.2", 00:10:46.829 "adrfam": "ipv4", 00:10:46.829 "trsvcid": "4420", 00:10:46.829 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:46.829 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:46.829 "hdgst": false, 00:10:46.829 "ddgst": false 00:10:46.829 }, 00:10:46.829 "method": "bdev_nvme_attach_controller" 00:10:46.829 }' 00:10:46.829 [2024-12-10 12:19:08.726787] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:10:46.829 [2024-12-10 12:19:08.726830] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1525236 ] 00:10:46.829 [2024-12-10 12:19:08.804137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:46.829 [2024-12-10 12:19:08.847493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.829 [2024-12-10 12:19:08.847601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.829 [2024-12-10 12:19:08.847602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.087 I/O targets: 00:10:47.087 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:47.087 00:10:47.087 00:10:47.087 CUnit - A unit testing framework for C - Version 2.1-3 00:10:47.087 http://cunit.sourceforge.net/ 00:10:47.087 00:10:47.087 00:10:47.087 Suite: bdevio tests on: Nvme1n1 00:10:47.087 Test: blockdev write read block ...passed 00:10:47.087 Test: blockdev write zeroes read block ...passed 00:10:47.087 Test: blockdev write zeroes read no split ...passed 00:10:47.087 Test: blockdev write zeroes read split 
...passed 00:10:47.087 Test: blockdev write zeroes read split partial ...passed 00:10:47.087 Test: blockdev reset ...[2024-12-10 12:19:09.204681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:47.087 [2024-12-10 12:19:09.204742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x802050 (9): Bad file descriptor 00:10:47.345 [2024-12-10 12:19:09.262764] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:47.345 passed 00:10:47.345 Test: blockdev write read 8 blocks ...passed 00:10:47.345 Test: blockdev write read size > 128k ...passed 00:10:47.345 Test: blockdev write read invalid size ...passed 00:10:47.345 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:47.345 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:47.345 Test: blockdev write read max offset ...passed 00:10:47.345 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:47.345 Test: blockdev writev readv 8 blocks ...passed 00:10:47.345 Test: blockdev writev readv 30 x 1block ...passed 00:10:47.345 Test: blockdev writev readv block ...passed 00:10:47.345 Test: blockdev writev readv size > 128k ...passed 00:10:47.345 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:47.345 Test: blockdev comparev and writev ...[2024-12-10 12:19:09.475042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:47.345 [2024-12-10 12:19:09.475072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:47.345 [2024-12-10 12:19:09.475086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:47.345 [2024-12-10 
12:19:09.475095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:47.345 [2024-12-10 12:19:09.475352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:47.345 [2024-12-10 12:19:09.475364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:47.345 [2024-12-10 12:19:09.475376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:47.345 [2024-12-10 12:19:09.475383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:47.345 [2024-12-10 12:19:09.475625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:47.345 [2024-12-10 12:19:09.475637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:47.345 [2024-12-10 12:19:09.475648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:47.345 [2024-12-10 12:19:09.475656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:47.345 [2024-12-10 12:19:09.475891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:47.345 [2024-12-10 12:19:09.475902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:47.345 [2024-12-10 12:19:09.475913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:47.346 [2024-12-10 12:19:09.475920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:47.604 passed 00:10:47.604 Test: blockdev nvme passthru rw ...passed 00:10:47.604 Test: blockdev nvme passthru vendor specific ...[2024-12-10 12:19:09.558504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:47.604 [2024-12-10 12:19:09.558523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:47.604 [2024-12-10 12:19:09.558640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:47.604 [2024-12-10 12:19:09.558650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:47.604 [2024-12-10 12:19:09.558753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:47.604 [2024-12-10 12:19:09.558763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:47.604 [2024-12-10 12:19:09.558870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:47.604 [2024-12-10 12:19:09.558881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:47.604 passed 00:10:47.604 Test: blockdev nvme admin passthru ...passed 00:10:47.604 Test: blockdev copy ...passed 00:10:47.604 00:10:47.604 Run Summary: Type Total Ran Passed Failed Inactive 00:10:47.604 suites 1 1 n/a 0 0 00:10:47.604 tests 23 23 23 0 0 00:10:47.604 asserts 152 152 152 0 n/a 00:10:47.604 00:10:47.604 Elapsed time = 1.233 seconds 
00:10:47.604 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:47.604 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.604 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:47.604 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.604 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:47.604 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:47.604 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:47.604 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:47.604 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:47.604 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:47.604 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:47.604 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:47.863 rmmod nvme_tcp 00:10:47.863 rmmod nvme_fabrics 00:10:47.863 rmmod nvme_keyring 00:10:47.863 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:47.863 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:47.863 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:47.863 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1525048 ']' 00:10:47.863 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1525048 00:10:47.863 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 1525048 ']' 00:10:47.863 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1525048 00:10:47.863 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:47.863 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.863 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1525048 00:10:47.863 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:47.863 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:47.863 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1525048' 00:10:47.863 killing process with pid 1525048 00:10:47.863 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1525048 00:10:47.863 12:19:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1525048 00:10:48.122 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:48.122 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:48.122 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:48.122 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:48.122 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:48.122 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:48.122 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:48.122 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:10:48.122 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:48.122 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.122 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.122 12:19:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.028 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:50.028 00:10:50.028 real 0m10.118s 00:10:50.028 user 0m10.209s 00:10:50.028 sys 0m4.929s 00:10:50.028 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.028 12:19:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.028 ************************************ 00:10:50.028 END TEST nvmf_bdevio 00:10:50.028 ************************************ 00:10:50.028 12:19:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:50.028 00:10:50.028 real 4m37.262s 00:10:50.028 user 10m23.391s 00:10:50.028 sys 1m36.205s 00:10:50.028 12:19:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.028 12:19:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:50.028 ************************************ 00:10:50.028 END TEST nvmf_target_core 00:10:50.028 ************************************ 00:10:50.288 12:19:12 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:50.288 12:19:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:50.288 12:19:12 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.288 12:19:12 nvmf_tcp -- common/autotest_common.sh@10 -- # 
set +x 00:10:50.288 ************************************ 00:10:50.288 START TEST nvmf_target_extra 00:10:50.288 ************************************ 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:50.288 * Looking for test storage... 00:10:50.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case 
"$op" in 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:50.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.288 --rc genhtml_branch_coverage=1 00:10:50.288 --rc genhtml_function_coverage=1 00:10:50.288 --rc genhtml_legend=1 00:10:50.288 --rc 
geninfo_all_blocks=1 00:10:50.288 --rc geninfo_unexecuted_blocks=1 00:10:50.288 00:10:50.288 ' 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:50.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.288 --rc genhtml_branch_coverage=1 00:10:50.288 --rc genhtml_function_coverage=1 00:10:50.288 --rc genhtml_legend=1 00:10:50.288 --rc geninfo_all_blocks=1 00:10:50.288 --rc geninfo_unexecuted_blocks=1 00:10:50.288 00:10:50.288 ' 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:50.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.288 --rc genhtml_branch_coverage=1 00:10:50.288 --rc genhtml_function_coverage=1 00:10:50.288 --rc genhtml_legend=1 00:10:50.288 --rc geninfo_all_blocks=1 00:10:50.288 --rc geninfo_unexecuted_blocks=1 00:10:50.288 00:10:50.288 ' 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:50.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.288 --rc genhtml_branch_coverage=1 00:10:50.288 --rc genhtml_function_coverage=1 00:10:50.288 --rc genhtml_legend=1 00:10:50.288 --rc geninfo_all_blocks=1 00:10:50.288 --rc geninfo_unexecuted_blocks=1 00:10:50.288 00:10:50.288 ' 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.288 12:19:12 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.288 12:19:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:50.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:50.548 ************************************ 00:10:50.548 START TEST nvmf_example 00:10:50.548 ************************************ 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:50.548 * Looking for test storage... 00:10:50.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 
00:10:50.548 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # 
ver2[v]=2 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:50.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.549 --rc genhtml_branch_coverage=1 00:10:50.549 --rc genhtml_function_coverage=1 00:10:50.549 --rc genhtml_legend=1 00:10:50.549 --rc geninfo_all_blocks=1 00:10:50.549 --rc geninfo_unexecuted_blocks=1 00:10:50.549 00:10:50.549 ' 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:50.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.549 --rc genhtml_branch_coverage=1 00:10:50.549 --rc genhtml_function_coverage=1 00:10:50.549 --rc genhtml_legend=1 00:10:50.549 --rc geninfo_all_blocks=1 00:10:50.549 --rc geninfo_unexecuted_blocks=1 00:10:50.549 00:10:50.549 ' 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:50.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.549 --rc genhtml_branch_coverage=1 00:10:50.549 --rc genhtml_function_coverage=1 00:10:50.549 --rc genhtml_legend=1 00:10:50.549 --rc geninfo_all_blocks=1 00:10:50.549 --rc geninfo_unexecuted_blocks=1 00:10:50.549 00:10:50.549 ' 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:50.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.549 --rc 
genhtml_branch_coverage=1 00:10:50.549 --rc genhtml_function_coverage=1 00:10:50.549 --rc genhtml_legend=1 00:10:50.549 --rc geninfo_all_blocks=1 00:10:50.549 --rc geninfo_unexecuted_blocks=1 00:10:50.549 00:10:50.549 ' 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:50.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:50.549 12:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.549 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.549 
12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.808 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:50.808 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:50.808 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:50.808 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.376 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:57.376 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:57.377 12:19:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:57.377 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:57.377 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:57.377 Found net devices under 0000:86:00.0: cvl_0_0 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:57.377 12:19:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:57.377 Found net devices under 0000:86:00.1: cvl_0_1 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:57.377 
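The device enumeration above (`gather_supported_nvmf_pci_devs`) matches each PCI port against known Intel/Mellanox vendor:device pairs before collecting its net interface. A hedged sketch of that classification step, not the script itself; the IDs are the ones listed in the log, and `classify_nic` is a hypothetical helper name:

```shell
#!/bin/sh
# Classify a PCI "vendor:device" pair the way the log's e810/x722/mlx
# arrays do. Only a subset of the Mellanox IDs from the log is shown.
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;   # Intel E810 (0x159b is what the log found)
        0x8086:0x37d2)               echo x722 ;;   # Intel X722
        0x15b3:0x1017|0x15b3:0x1019|0x15b3:0x101b|0x15b3:0x101d) echo mlx ;;
        *)                           echo unsupported ;;
    esac
}

# Both ports found in the log (0000:86:00.0 / 0000:86:00.1) are 0x8086:0x159b:
classify_nic 0x8086:0x159b
```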
12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:57.377 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:57.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:57.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:10:57.378 00:10:57.378 --- 10.0.0.2 ping statistics --- 00:10:57.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.378 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:57.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:57.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:10:57.378 00:10:57.378 --- 10.0.0.1 ping statistics --- 00:10:57.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.378 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:57.378 12:19:18 
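The `nvmftestinit` sequence above moves one E810 port into a dedicated network namespace for the target, keeps the other in the root namespace as the initiator, opens the listener port, and verifies connectivity with the two pings shown. A dry-run sketch of the equivalent plumbing; interface names and addresses come from the log, and passing "echo" prints the commands instead of executing them (no root needed):

```shell
#!/bin/sh
# Dry-run sketch of the target/initiator namespace split seen above.
# ns_setup echo  -> print the commands; ns_setup '' -> execute (needs root).
ns_setup() {
    RUN=$1
    NS=cvl_0_0_ns_spdk
    $RUN ip netns add "$NS"                      # target gets its own netns
    $RUN ip link set cvl_0_0 netns "$NS"         # move the target-side port in
    $RUN ip addr add 10.0.0.1/24 dev cvl_0_1     # initiator IP, root namespace
    $RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    $RUN ip link set cvl_0_1 up
    $RUN ip netns exec "$NS" ip link set cvl_0_0 up
    # open the NVMe/TCP listener port (the log tags the rule with a
    # comment so cleanup can strip it later via iptables-save | grep -v):
    $RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    $RUN ip netns exec "$NS" ping -c 1 10.0.0.1  # cross-namespace sanity check
}

ns_setup echo
```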
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1529101 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1529101 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1529101 ']' 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:10:57.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.378 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.378 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.378 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:57.378 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:57.378 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:57.378 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.637 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:57.637 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.637 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.637 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.637 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:57.637 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.637 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.637 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.637 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:57.637 
12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:57.637 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.637 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.637 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.637 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:57.637 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:57.637 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.637 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.637 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.637 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:57.637 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.637 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.637 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.637 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf 00:10:57.637 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 64 
-o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:09.842 Initializing NVMe Controllers 00:11:09.842 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:09.842 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:09.842 Initialization complete. Launching workers. 00:11:09.842 ======================================================== 00:11:09.842 Latency(us) 00:11:09.842 Device Information : IOPS MiB/s Average min max 00:11:09.842 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17964.94 70.18 3562.34 516.08 15482.01 00:11:09.842 ======================================================== 00:11:09.842 Total : 17964.94 70.18 3562.34 516.08 15482.01 00:11:09.842 00:11:09.842 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:09.842 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:09.842 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:09.842 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:09.842 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:09.842 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:09.842 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:09.842 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:09.842 rmmod nvme_tcp 00:11:09.842 rmmod nvme_fabrics 00:11:09.842 rmmod nvme_keyring 00:11:09.842 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:09.842 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:11:09.842 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:11:09.842 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1529101 ']'
00:11:09.842 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1529101
00:11:09.842 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1529101 ']'
00:11:09.842 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1529101
00:11:09.842 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:11:09.842 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:09.842 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1529101
00:11:09.842 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf
00:11:09.842 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:11:09.842 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1529101'
killing process with pid 1529101
00:11:09.842 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1529101
00:11:09.842 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1529101
00:11:09.842 nvmf threads initialize successfully
00:11:09.842 bdev subsystem init successfully
00:11:09.842 created a nvmf target service
00:11:09.842 create targets's poll groups done
00:11:09.842 all subsystems of target started
00:11:09.842 nvmf target is running
00:11:09.843 all subsystems of target stopped
00:11:09.843 destroy targets's poll groups done
00:11:09.843 destroyed the nvmf target service
00:11:09.843 bdev subsystem finish successfully
00:11:09.843 nvmf threads destroy successfully
00:11:09.843 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:09.843 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:09.843 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:09.843 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:11:09.843 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:11:09.843 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:09.843 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:11:09.843 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:09.843 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:09.843 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:09.843 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:09.843 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:10.101 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:10.101 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:11:10.101 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:10.101 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:10.101
00:11:10.101 real 0m19.730s
00:11:10.101 user 0m45.901s
00:11:10.101 sys 0m5.992s
00:11:10.101 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:10.101 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:10.101 ************************************
00:11:10.101 END TEST nvmf_example
00:11:10.101 ************************************
00:11:10.101 12:19:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:11:10.101 12:19:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:10.101 12:19:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:10.101 12:19:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:10.361 ************************************
00:11:10.361 START TEST nvmf_filesystem
00:11:10.361 ************************************
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:11:10.361 * Looking for test storage...
00:11:10.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:11:10.361 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:10.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:10.362 --rc genhtml_branch_coverage=1
00:11:10.362 --rc genhtml_function_coverage=1
00:11:10.362 --rc genhtml_legend=1
00:11:10.362 --rc geninfo_all_blocks=1
00:11:10.362 --rc geninfo_unexecuted_blocks=1
00:11:10.362
00:11:10.362 '
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:10.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:10.362 --rc genhtml_branch_coverage=1
00:11:10.362 --rc genhtml_function_coverage=1
00:11:10.362 --rc genhtml_legend=1
00:11:10.362 --rc geninfo_all_blocks=1
00:11:10.362 --rc geninfo_unexecuted_blocks=1
00:11:10.362
00:11:10.362 '
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:11:10.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:10.362 --rc genhtml_branch_coverage=1
00:11:10.362 --rc genhtml_function_coverage=1
00:11:10.362 --rc genhtml_legend=1
00:11:10.362 --rc geninfo_all_blocks=1
00:11:10.362 --rc geninfo_unexecuted_blocks=1
00:11:10.362
00:11:10.362 '
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:11:10.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:10.362 --rc genhtml_branch_coverage=1
00:11:10.362 --rc genhtml_function_coverage=1
00:11:10.362 --rc genhtml_legend=1
00:11:10.362 --rc geninfo_all_blocks=1
00:11:10.362 --rc geninfo_unexecuted_blocks=1
00:11:10.362
00:11:10.362 '
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output ']'
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/build_config.sh ]]
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/build_config.sh
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:11:10.362 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/applications.sh
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/applications.sh
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk/config.h ]]
00:11:10.363 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:11:10.363 #define SPDK_CONFIG_H
00:11:10.363 #define SPDK_CONFIG_AIO_FSDEV 1
00:11:10.363 #define SPDK_CONFIG_APPS 1
00:11:10.363 #define SPDK_CONFIG_ARCH native
00:11:10.363 #undef SPDK_CONFIG_ASAN
00:11:10.363 #undef SPDK_CONFIG_AVAHI
00:11:10.363 #undef SPDK_CONFIG_CET
00:11:10.363 #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:11:10.363 #define SPDK_CONFIG_COVERAGE 1
00:11:10.363 #define SPDK_CONFIG_CROSS_PREFIX
00:11:10.363 #undef SPDK_CONFIG_CRYPTO
00:11:10.363 #undef SPDK_CONFIG_CRYPTO_MLX5
00:11:10.363 #undef SPDK_CONFIG_CUSTOMOCF
00:11:10.363 #undef SPDK_CONFIG_DAOS
00:11:10.363 #define SPDK_CONFIG_DAOS_DIR
00:11:10.363 #define SPDK_CONFIG_DEBUG 1
00:11:10.363 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:11:10.363 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build
00:11:10.363 #define SPDK_CONFIG_DPDK_INC_DIR
00:11:10.363 #define SPDK_CONFIG_DPDK_LIB_DIR
00:11:10.363 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:11:10.363 #undef SPDK_CONFIG_DPDK_UADK
00:11:10.363 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk
00:11:10.363 #define SPDK_CONFIG_EXAMPLES 1
00:11:10.363 #undef SPDK_CONFIG_FC
00:11:10.363 #define SPDK_CONFIG_FC_PATH
00:11:10.363 #define SPDK_CONFIG_FIO_PLUGIN 1
00:11:10.363 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:11:10.363 #define SPDK_CONFIG_FSDEV 1
00:11:10.363 #undef SPDK_CONFIG_FUSE
00:11:10.363 #undef SPDK_CONFIG_FUZZER
00:11:10.363 #define SPDK_CONFIG_FUZZER_LIB
00:11:10.363 #undef SPDK_CONFIG_GOLANG
00:11:10.363 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:11:10.363 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:11:10.363 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:11:10.363 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:11:10.363 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:11:10.363 #undef SPDK_CONFIG_HAVE_LIBBSD
00:11:10.363 #undef SPDK_CONFIG_HAVE_LZ4
00:11:10.363 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:11:10.363 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:11:10.363 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:11:10.363 #define SPDK_CONFIG_IDXD 1
00:11:10.363 #define SPDK_CONFIG_IDXD_KERNEL 1
00:11:10.363 #undef SPDK_CONFIG_IPSEC_MB
00:11:10.363 #define SPDK_CONFIG_IPSEC_MB_DIR
00:11:10.363 #define SPDK_CONFIG_ISAL 1
00:11:10.363 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:11:10.363 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:11:10.363 #define SPDK_CONFIG_LIBDIR
00:11:10.363 #undef SPDK_CONFIG_LTO
00:11:10.363 #define SPDK_CONFIG_MAX_LCORES 128
00:11:10.363 #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:11:10.363 #define SPDK_CONFIG_NVME_CUSE 1
00:11:10.363 #undef SPDK_CONFIG_OCF
00:11:10.363 #define SPDK_CONFIG_OCF_PATH
00:11:10.363 #define SPDK_CONFIG_OPENSSL_PATH
00:11:10.363 #undef SPDK_CONFIG_PGO_CAPTURE
00:11:10.363 #define SPDK_CONFIG_PGO_DIR
00:11:10.363 #undef SPDK_CONFIG_PGO_USE
00:11:10.363 #define SPDK_CONFIG_PREFIX /usr/local
00:11:10.363 #undef SPDK_CONFIG_RAID5F
00:11:10.363 #undef SPDK_CONFIG_RBD
00:11:10.363 #define SPDK_CONFIG_RDMA 1
00:11:10.363 #define SPDK_CONFIG_RDMA_PROV verbs
00:11:10.363 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:11:10.363 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:11:10.363 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:11:10.363 #define SPDK_CONFIG_SHARED 1
00:11:10.363 #undef SPDK_CONFIG_SMA
00:11:10.363 #define SPDK_CONFIG_TESTS 1
00:11:10.363 #undef SPDK_CONFIG_TSAN
00:11:10.363 #define SPDK_CONFIG_UBLK 1
00:11:10.364 #define SPDK_CONFIG_UBSAN 1
00:11:10.364 #undef SPDK_CONFIG_UNIT_TESTS
00:11:10.364 #undef SPDK_CONFIG_URING
00:11:10.364 #define SPDK_CONFIG_URING_PATH
00:11:10.364 #undef SPDK_CONFIG_URING_ZNS
00:11:10.364 #undef SPDK_CONFIG_USDT
00:11:10.364 #undef SPDK_CONFIG_VBDEV_COMPRESS
00:11:10.364 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:11:10.364 #define SPDK_CONFIG_VFIO_USER 1
00:11:10.364 #define SPDK_CONFIG_VFIO_USER_DIR
00:11:10.364 #define SPDK_CONFIG_VHOST 1
00:11:10.364 #define SPDK_CONFIG_VIRTIO 1
00:11:10.364 #undef SPDK_CONFIG_VTUNE
00:11:10.364 #define SPDK_CONFIG_VTUNE_DIR
00:11:10.364 #define SPDK_CONFIG_WERROR 1
00:11:10.364 #define SPDK_CONFIG_WPDK_DIR
00:11:10.364 #undef SPDK_CONFIG_XNVME
00:11:10.364 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:11:10.364 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
00:11:10.364 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh
00:11:10.364 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:11:10.364 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:10.364 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:10.364 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:10.364 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:10.364 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:10.364 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:10.364 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:11:10.364 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:10.364 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/common
00:11:10.364 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/common
00:11:10.364 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm
00:11:10.364 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm
00:11:10.364 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/../../../
00:11:10.364 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
00:11:10.626 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A
00:11:10.626 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/.run_test_name
00:11:10.626 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power
00:11:10.626 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s
00:11:10.626 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux
00:11:10.626 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:11:10.626 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:11:10.626 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:11:10.626 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:11:10.626 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:11:10.626 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:11:10.626 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]=
00:11:10.626 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E'
00:11:10.626 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:11:10.626 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:11:10.626 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]]
00:11:10.626 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]]
00:11:10.626 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ !
-e /.dockerenv ]] 00:11:10.626 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power ]] 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:10.627 12:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:10.627 
12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:10.627 12:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:10.627 
12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:10.627 12:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:10.627 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/ar-xnvme-fixer 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/ar-xnvme-fixer 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:10.628 12:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:10.628 12:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:10.628 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1531457 ]] 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1531457 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:10.629 12:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.XmW7Lq 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target /tmp/spdk.XmW7Lq/tests/target /tmp/spdk.XmW7Lq 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=194295758848 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=201248804864 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6953046016 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=100614369280 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=100624400384 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=40226734080 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=40249761792 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23027712 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=100624044032 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=100624404480 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=360448 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=20124864512 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=20124876800 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:10.629 * Looking for test storage... 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=194295758848 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:10.629 
12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9167638528 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:11:10.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:10.629 12:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.629 12:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.629 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:10.630 12:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:10.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.630 --rc genhtml_branch_coverage=1 00:11:10.630 --rc genhtml_function_coverage=1 00:11:10.630 --rc genhtml_legend=1 00:11:10.630 --rc geninfo_all_blocks=1 00:11:10.630 --rc geninfo_unexecuted_blocks=1 00:11:10.630 00:11:10.630 ' 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:10.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.630 --rc genhtml_branch_coverage=1 00:11:10.630 --rc genhtml_function_coverage=1 00:11:10.630 --rc genhtml_legend=1 00:11:10.630 --rc geninfo_all_blocks=1 00:11:10.630 --rc geninfo_unexecuted_blocks=1 00:11:10.630 00:11:10.630 ' 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:10.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.630 --rc genhtml_branch_coverage=1 00:11:10.630 --rc genhtml_function_coverage=1 00:11:10.630 --rc genhtml_legend=1 00:11:10.630 --rc geninfo_all_blocks=1 00:11:10.630 --rc geninfo_unexecuted_blocks=1 00:11:10.630 00:11:10.630 ' 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:11:10.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.630 --rc genhtml_branch_coverage=1 00:11:10.630 --rc genhtml_function_coverage=1 00:11:10.630 --rc genhtml_legend=1 00:11:10.630 --rc geninfo_all_blocks=1 00:11:10.630 --rc geninfo_unexecuted_blocks=1 00:11:10.630 00:11:10.630 ' 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.630 12:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:10.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:10.630 12:19:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:17.203 12:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:17.203 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:17.203 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.203 12:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:17.203 Found net devices under 0000:86:00.0: cvl_0_0 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:17.203 Found net devices under 0000:86:00.1: cvl_0_1 00:11:17.203 12:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:17.203 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:17.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:17.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:11:17.203 00:11:17.203 --- 10.0.0.2 ping statistics --- 00:11:17.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.204 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:17.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:17.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:11:17.204 00:11:17.204 --- 10.0.0.1 ping statistics --- 00:11:17.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.204 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:17.204 12:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.204 ************************************ 00:11:17.204 START TEST nvmf_filesystem_no_in_capsule 00:11:17.204 ************************************ 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1534549 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1534549 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 1534549 ']' 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.204 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.204 [2024-12-10 12:19:38.847968] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:11:17.204 [2024-12-10 12:19:38.848013] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.204 [2024-12-10 12:19:38.926874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.204 [2024-12-10 12:19:38.968397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.204 [2024-12-10 12:19:38.968433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:17.204 [2024-12-10 12:19:38.968440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.204 [2024-12-10 12:19:38.968446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.204 [2024-12-10 12:19:38.968451] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:17.204 [2024-12-10 12:19:38.970009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.204 [2024-12-10 12:19:38.970117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.204 [2024-12-10 12:19:38.970224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.204 [2024-12-10 12:19:38.970225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.204 [2024-12-10 12:19:39.108662] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.204 Malloc1 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.204 [2024-12-10 12:19:39.268403] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:17.204 12:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.204 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:17.204 { 00:11:17.204 "name": "Malloc1", 00:11:17.204 "aliases": [ 00:11:17.204 "7f5650cb-12e8-47e5-91ca-90e2184bb270" 00:11:17.204 ], 00:11:17.204 "product_name": "Malloc disk", 00:11:17.204 "block_size": 512, 00:11:17.204 "num_blocks": 1048576, 00:11:17.204 "uuid": "7f5650cb-12e8-47e5-91ca-90e2184bb270", 00:11:17.204 "assigned_rate_limits": { 00:11:17.204 "rw_ios_per_sec": 0, 00:11:17.204 "rw_mbytes_per_sec": 0, 00:11:17.204 "r_mbytes_per_sec": 0, 00:11:17.204 "w_mbytes_per_sec": 0 00:11:17.204 }, 00:11:17.204 "claimed": true, 00:11:17.204 "claim_type": "exclusive_write", 00:11:17.204 "zoned": false, 00:11:17.204 "supported_io_types": { 00:11:17.204 "read": true, 00:11:17.204 "write": true, 00:11:17.204 "unmap": true, 00:11:17.204 "flush": true, 00:11:17.204 "reset": true, 00:11:17.204 "nvme_admin": false, 00:11:17.204 "nvme_io": false, 00:11:17.204 "nvme_io_md": false, 00:11:17.205 "write_zeroes": true, 00:11:17.205 "zcopy": true, 00:11:17.205 "get_zone_info": false, 00:11:17.205 "zone_management": false, 00:11:17.205 "zone_append": false, 00:11:17.205 "compare": false, 00:11:17.205 "compare_and_write": 
false, 00:11:17.205 "abort": true, 00:11:17.205 "seek_hole": false, 00:11:17.205 "seek_data": false, 00:11:17.205 "copy": true, 00:11:17.205 "nvme_iov_md": false 00:11:17.205 }, 00:11:17.205 "memory_domains": [ 00:11:17.205 { 00:11:17.205 "dma_device_id": "system", 00:11:17.205 "dma_device_type": 1 00:11:17.205 }, 00:11:17.205 { 00:11:17.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.205 "dma_device_type": 2 00:11:17.205 } 00:11:17.205 ], 00:11:17.205 "driver_specific": {} 00:11:17.205 } 00:11:17.205 ]' 00:11:17.205 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:17.205 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:17.205 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:17.463 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:17.464 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:17.464 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:17.464 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:17.464 12:19:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:18.399 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:18.399 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:18.399 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.399 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:18.399 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:20.933 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:20.933 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:20.933 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:20.933 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:20.933 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:20.933 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:20.933 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:20.933 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:20.933 12:19:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:20.933 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:20.933 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:20.933 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:20.933 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:20.933 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:20.933 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:20.933 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:20.933 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:20.933 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:20.933 12:19:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:21.869 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:21.869 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:21.869 12:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:21.869 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.869 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.869 ************************************ 00:11:21.869 START TEST filesystem_ext4 00:11:21.869 ************************************ 00:11:21.869 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:21.869 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:21.869 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:21.869 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:21.869 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:21.869 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:21.869 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:21.869 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:21.869 12:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:21.870 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:21.870 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:21.870 mke2fs 1.47.0 (5-Feb-2023) 00:11:22.128 Discarding device blocks: 0/522240 done 00:11:22.129 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:22.129 Filesystem UUID: 9688522d-72ee-4ff2-968f-3a941ecce6ed 00:11:22.129 Superblock backups stored on blocks: 00:11:22.129 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:22.129 00:11:22.129 Allocating group tables: 0/64 done 00:11:22.129 Writing inode tables: 0/64 done 00:11:22.387 Creating journal (8192 blocks): done 00:11:22.387 Writing superblocks and filesystem accounting information: 0/64 done 00:11:22.387 00:11:22.387 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:22.387 12:19:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:27.657 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:27.657 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:27.657 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:27.657 12:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:27.657 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:27.657 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:27.657 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1534549 00:11:27.657 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:27.657 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:27.657 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:27.657 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:27.657 00:11:27.657 real 0m5.829s 00:11:27.657 user 0m0.029s 00:11:27.657 sys 0m0.065s 00:11:27.657 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.657 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:27.657 ************************************ 00:11:27.657 END TEST filesystem_ext4 00:11:27.657 ************************************ 00:11:27.915 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:27.916 
12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:27.916 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.916 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.916 ************************************ 00:11:27.916 START TEST filesystem_btrfs 00:11:27.916 ************************************ 00:11:27.916 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:27.916 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:27.916 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:27.916 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:27.916 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:27.916 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:27.916 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:27.916 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:27.916 12:19:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:27.916 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:27.916 12:19:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:27.916 btrfs-progs v6.8.1 00:11:27.916 See https://btrfs.readthedocs.io for more information. 00:11:27.916 00:11:27.916 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:27.916 NOTE: several default settings have changed in version 5.15, please make sure 00:11:27.916 this does not affect your deployments: 00:11:27.916 - DUP for metadata (-m dup) 00:11:27.916 - enabled no-holes (-O no-holes) 00:11:27.916 - enabled free-space-tree (-R free-space-tree) 00:11:27.916 00:11:27.916 Label: (null) 00:11:27.916 UUID: edf93bdb-f3b2-4038-ad08-13e885ff988c 00:11:27.916 Node size: 16384 00:11:27.916 Sector size: 4096 (CPU page size: 4096) 00:11:27.916 Filesystem size: 510.00MiB 00:11:27.916 Block group profiles: 00:11:27.916 Data: single 8.00MiB 00:11:27.916 Metadata: DUP 32.00MiB 00:11:27.916 System: DUP 8.00MiB 00:11:27.916 SSD detected: yes 00:11:27.916 Zoned device: no 00:11:27.916 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:27.916 Checksum: crc32c 00:11:27.916 Number of devices: 1 00:11:27.916 Devices: 00:11:27.916 ID SIZE PATH 00:11:27.916 1 510.00MiB /dev/nvme0n1p1 00:11:27.916 00:11:27.916 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:27.916 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:28.852 12:19:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:28.852 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:28.852 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:28.852 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:28.852 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:28.852 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:28.852 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1534549 00:11:28.852 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:28.852 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:28.852 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:28.852 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:28.852 00:11:28.852 real 0m1.071s 00:11:28.852 user 0m0.031s 00:11:28.852 sys 0m0.108s 00:11:28.852 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.852 
12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:28.852 ************************************ 00:11:28.852 END TEST filesystem_btrfs 00:11:28.852 ************************************ 00:11:28.852 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:28.852 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:28.852 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.852 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.852 ************************************ 00:11:28.852 START TEST filesystem_xfs 00:11:28.852 ************************************ 00:11:28.852 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:28.852 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:28.852 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:28.852 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:28.852 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:28.852 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:28.852 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:28.852 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:29.111 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:29.111 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:29.111 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:29.111 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:29.111 = sectsz=512 attr=2, projid32bit=1 00:11:29.111 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:29.111 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:29.111 data = bsize=4096 blocks=130560, imaxpct=25 00:11:29.111 = sunit=0 swidth=0 blks 00:11:29.111 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:29.111 log =internal log bsize=4096 blocks=16384, version=2 00:11:29.111 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:29.111 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:30.047 Discarding blocks...Done. 
00:11:30.047 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:30.047 12:19:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:32.580 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:32.580 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:32.580 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:32.580 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:32.580 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:32.580 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:32.580 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1534549 00:11:32.580 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:32.580 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:32.580 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:32.580 12:19:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:32.580 00:11:32.580 real 0m3.248s 00:11:32.580 user 0m0.041s 00:11:32.580 sys 0m0.056s 00:11:32.580 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.580 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:32.580 ************************************ 00:11:32.580 END TEST filesystem_xfs 00:11:32.580 ************************************ 00:11:32.580 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:32.580 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:32.580 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:32.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.580 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:32.580 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:32.580 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:32.580 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.580 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:32.581 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.581 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:32.581 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:32.581 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.581 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.581 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.581 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:32.581 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1534549 00:11:32.581 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1534549 ']' 00:11:32.581 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1534549 00:11:32.581 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:32.581 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.581 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1534549 00:11:32.581 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.581 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.581 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1534549' 00:11:32.581 killing process with pid 1534549 00:11:32.581 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1534549 00:11:32.581 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 1534549 00:11:32.840 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:32.841 00:11:32.841 real 0m16.029s 00:11:32.841 user 1m3.046s 00:11:32.841 sys 0m1.331s 00:11:32.841 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.841 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.841 ************************************ 00:11:32.841 END TEST nvmf_filesystem_no_in_capsule 00:11:32.841 ************************************ 00:11:32.841 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:32.841 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:32.841 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.841 12:19:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:32.841 ************************************ 00:11:32.841 START TEST nvmf_filesystem_in_capsule 00:11:32.841 ************************************ 00:11:32.841 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:32.841 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:32.841 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:32.841 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:32.841 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:32.841 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.841 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1537438 00:11:32.841 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1537438 00:11:32.841 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:32.841 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1537438 ']' 00:11:32.841 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.841 12:19:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.841 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.841 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.841 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.841 [2024-12-10 12:19:54.955915] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:11:32.841 [2024-12-10 12:19:54.955962] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.104 [2024-12-10 12:19:55.041480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:33.104 [2024-12-10 12:19:55.081215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.104 [2024-12-10 12:19:55.081253] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.104 [2024-12-10 12:19:55.081262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.104 [2024-12-10 12:19:55.081268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.104 [2024-12-10 12:19:55.081273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:33.104 [2024-12-10 12:19:55.082716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.104 [2024-12-10 12:19:55.082828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.104 [2024-12-10 12:19:55.082933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.104 [2024-12-10 12:19:55.082933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.104 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.104 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:33.104 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:33.104 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:33.104 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.104 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.104 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:33.104 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:33.104 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.104 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.104 [2024-12-10 12:19:55.229032] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.104 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.104 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:33.104 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.104 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.382 Malloc1 00:11:33.382 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.382 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:33.382 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.382 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.382 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.382 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:33.382 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.382 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.382 12:19:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.382 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.382 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.382 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.382 [2024-12-10 12:19:55.393350] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.382 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.382 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:33.382 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:33.382 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:33.382 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:33.382 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:33.382 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:33.382 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.382 12:19:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.382 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.382 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:33.382 { 00:11:33.382 "name": "Malloc1", 00:11:33.382 "aliases": [ 00:11:33.382 "b346e4ab-6384-47e4-8ac1-99292c7bd6ad" 00:11:33.382 ], 00:11:33.382 "product_name": "Malloc disk", 00:11:33.382 "block_size": 512, 00:11:33.382 "num_blocks": 1048576, 00:11:33.383 "uuid": "b346e4ab-6384-47e4-8ac1-99292c7bd6ad", 00:11:33.383 "assigned_rate_limits": { 00:11:33.383 "rw_ios_per_sec": 0, 00:11:33.383 "rw_mbytes_per_sec": 0, 00:11:33.383 "r_mbytes_per_sec": 0, 00:11:33.383 "w_mbytes_per_sec": 0 00:11:33.383 }, 00:11:33.383 "claimed": true, 00:11:33.383 "claim_type": "exclusive_write", 00:11:33.383 "zoned": false, 00:11:33.383 "supported_io_types": { 00:11:33.383 "read": true, 00:11:33.383 "write": true, 00:11:33.383 "unmap": true, 00:11:33.383 "flush": true, 00:11:33.383 "reset": true, 00:11:33.383 "nvme_admin": false, 00:11:33.383 "nvme_io": false, 00:11:33.383 "nvme_io_md": false, 00:11:33.383 "write_zeroes": true, 00:11:33.383 "zcopy": true, 00:11:33.383 "get_zone_info": false, 00:11:33.383 "zone_management": false, 00:11:33.383 "zone_append": false, 00:11:33.383 "compare": false, 00:11:33.383 "compare_and_write": false, 00:11:33.383 "abort": true, 00:11:33.383 "seek_hole": false, 00:11:33.383 "seek_data": false, 00:11:33.383 "copy": true, 00:11:33.383 "nvme_iov_md": false 00:11:33.383 }, 00:11:33.383 "memory_domains": [ 00:11:33.383 { 00:11:33.383 "dma_device_id": "system", 00:11:33.383 "dma_device_type": 1 00:11:33.383 }, 00:11:33.383 { 00:11:33.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.383 "dma_device_type": 2 00:11:33.383 } 00:11:33.383 ], 00:11:33.383 
"driver_specific": {} 00:11:33.383 } 00:11:33.383 ]' 00:11:33.383 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:33.383 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:33.383 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:33.383 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:33.383 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:33.383 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:33.383 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:33.383 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:34.841 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:34.841 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:34.841 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:34.841 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:11:34.841 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:36.752 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:36.752 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:36.752 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:36.752 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:36.752 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:36.752 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:36.752 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:36.752 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:36.752 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:36.752 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:36.752 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:36.752 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:36.752 12:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:36.752 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:36.752 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:36.752 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:36.752 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:37.010 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:37.576 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:38.511 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:38.511 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:38.511 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:38.511 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.511 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.511 ************************************ 00:11:38.511 START TEST filesystem_in_capsule_ext4 00:11:38.511 ************************************ 00:11:38.511 12:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:38.511 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:38.511 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:38.511 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:38.511 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:38.511 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:38.512 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:38.512 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:38.512 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:38.512 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:38.512 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:38.512 mke2fs 1.47.0 (5-Feb-2023) 00:11:38.512 Discarding device blocks: 
0/522240 done 00:11:38.512 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:38.512 Filesystem UUID: f45d6646-c432-4d3e-8a84-5d4f125b1dd0 00:11:38.512 Superblock backups stored on blocks: 00:11:38.512 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:38.512 00:11:38.512 Allocating group tables: 0/64 done 00:11:38.512 Writing inode tables: 0/64 done 00:11:39.894 Creating journal (8192 blocks): done 00:11:39.894 Writing superblocks and filesystem accounting information: 0/64 done 00:11:39.894 00:11:39.894 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:39.894 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:45.161 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1537438 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:45.419 00:11:45.419 real 0m6.905s 00:11:45.419 user 0m0.029s 00:11:45.419 sys 0m0.067s 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:45.419 ************************************ 00:11:45.419 END TEST filesystem_in_capsule_ext4 00:11:45.419 ************************************ 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.419 ************************************ 00:11:45.419 START 
TEST filesystem_in_capsule_btrfs 00:11:45.419 ************************************ 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:45.419 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:45.677 btrfs-progs v6.8.1 00:11:45.678 See https://btrfs.readthedocs.io for more information. 00:11:45.678 00:11:45.678 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:45.678 NOTE: several default settings have changed in version 5.15, please make sure 00:11:45.678 this does not affect your deployments: 00:11:45.678 - DUP for metadata (-m dup) 00:11:45.678 - enabled no-holes (-O no-holes) 00:11:45.678 - enabled free-space-tree (-R free-space-tree) 00:11:45.678 00:11:45.678 Label: (null) 00:11:45.678 UUID: d18e8e6b-f3ba-4769-bdf4-f288b1d21727 00:11:45.678 Node size: 16384 00:11:45.678 Sector size: 4096 (CPU page size: 4096) 00:11:45.678 Filesystem size: 510.00MiB 00:11:45.678 Block group profiles: 00:11:45.678 Data: single 8.00MiB 00:11:45.678 Metadata: DUP 32.00MiB 00:11:45.678 System: DUP 8.00MiB 00:11:45.678 SSD detected: yes 00:11:45.678 Zoned device: no 00:11:45.678 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:45.678 Checksum: crc32c 00:11:45.678 Number of devices: 1 00:11:45.678 Devices: 00:11:45.678 ID SIZE PATH 00:11:45.678 1 510.00MiB /dev/nvme0n1p1 00:11:45.678 00:11:45.678 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:45.678 12:20:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:45.936 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:45.936 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:45.936 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:45.936 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:45.936 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:45.936 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:45.936 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1537438 00:11:46.195 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:46.195 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:46.195 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:46.195 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:46.195 00:11:46.196 real 0m0.624s 00:11:46.196 user 0m0.028s 00:11:46.196 sys 0m0.109s 00:11:46.196 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.196 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:46.196 ************************************ 00:11:46.196 END TEST filesystem_in_capsule_btrfs 00:11:46.196 ************************************ 00:11:46.196 12:20:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:46.196 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:46.196 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.196 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.196 ************************************ 00:11:46.196 START TEST filesystem_in_capsule_xfs 00:11:46.196 ************************************ 00:11:46.196 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:46.196 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:46.196 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:46.196 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:46.196 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:46.196 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:46.196 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:46.196 
12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:46.196 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:46.196 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:46.196 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:46.196 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:46.196 = sectsz=512 attr=2, projid32bit=1 00:11:46.196 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:46.196 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:46.196 data = bsize=4096 blocks=130560, imaxpct=25 00:11:46.196 = sunit=0 swidth=0 blks 00:11:46.196 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:46.196 log =internal log bsize=4096 blocks=16384, version=2 00:11:46.196 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:46.196 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:47.131 Discarding blocks...Done. 
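Each of the three filesystem tests in this log (ext4, btrfs, and xfs here) drives the same verification cycle from target/filesystem.sh: mount the new filesystem, create a file, sync, delete it, sync again, check the target process is still alive, and unmount. The sketch below reconstructs that cycle from the xtrace lines alone (sh@23 through sh@30); it is not the actual SPDK script, and a temporary directory stands in for the real /dev/nvme0n1p1 mount at /mnt/device so it runs without root or an NVMe-oF connection.

```shell
# Reconstructed sketch of the per-filesystem check seen in this log
# (target/filesystem.sh@23-@30). A temp dir replaces the real mount of
# /dev/nvme0n1p1 at /mnt/device, so no root or NVMe device is needed.
set -e
mnt=$(mktemp -d)   # in the log: mount /dev/nvme0n1p1 /mnt/device (@23)
touch "$mnt/aaa"   # @24: create a file on the fresh filesystem
sync               # @25: force writeback toward the device
rm "$mnt/aaa"      # @26: delete the file again
sync               # @27: flush the deletion
kill -0 $$         # @37 analogue: assert the target process is still alive
rmdir "$mnt"       # in the log: umount /mnt/device (@30)
echo "cycle ok"
```

The log's `kill -0 1537438` line is the liveness check against the real nvmf target PID; here the sketch checks its own shell PID instead.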
00:11:47.131 12:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:47.131 12:20:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:49.034 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:49.034 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:49.034 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:49.034 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:49.034 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:49.034 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:49.034 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1537438 00:11:49.034 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:49.034 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:49.034 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:49.034 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:49.034 00:11:49.034 real 0m2.876s 00:11:49.034 user 0m0.022s 00:11:49.034 sys 0m0.076s 00:11:49.034 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.034 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:49.034 ************************************ 00:11:49.034 END TEST filesystem_in_capsule_xfs 00:11:49.034 ************************************ 00:11:49.034 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:49.034 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:49.034 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:49.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.293 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:49.293 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:49.293 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:49.293 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.293 12:20:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:49.293 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:49.293 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:49.293 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.293 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.293 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.293 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.293 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:49.293 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1537438 00:11:49.293 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1537438 ']' 00:11:49.293 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1537438 00:11:49.293 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:49.293 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.293 12:20:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1537438 00:11:49.293 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.293 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.293 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1537438' 00:11:49.293 killing process with pid 1537438 00:11:49.293 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1537438 00:11:49.293 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1537438 00:11:49.552 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:49.552 00:11:49.552 real 0m16.751s 00:11:49.552 user 1m5.848s 00:11:49.552 sys 0m1.416s 00:11:49.552 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.552 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.552 ************************************ 00:11:49.552 END TEST nvmf_filesystem_in_capsule 00:11:49.552 ************************************ 00:11:49.552 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:49.552 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:49.552 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:49.552 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:49.552 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:49.552 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:49.552 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:49.552 rmmod nvme_tcp 00:11:49.552 rmmod nvme_fabrics 00:11:49.811 rmmod nvme_keyring 00:11:49.811 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:49.811 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:49.811 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:49.811 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:49.811 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:49.811 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:49.811 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:49.811 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:49.811 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:49.811 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:49.811 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:49.811 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:49.811 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:49.811 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.811 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.811 12:20:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.716 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:51.716 00:11:51.716 real 0m41.541s 00:11:51.716 user 2m10.909s 00:11:51.716 sys 0m7.503s 00:11:51.716 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.716 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:51.716 ************************************ 00:11:51.716 END TEST nvmf_filesystem 00:11:51.716 ************************************ 00:11:51.716 12:20:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:51.716 12:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:51.716 12:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.716 12:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:51.975 ************************************ 00:11:51.975 START TEST nvmf_target_discovery 00:11:51.976 ************************************ 00:11:51.976 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:51.976 * Looking for test storage... 
00:11:51.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:51.976 
12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:51.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.976 --rc genhtml_branch_coverage=1 00:11:51.976 --rc genhtml_function_coverage=1 00:11:51.976 --rc genhtml_legend=1 00:11:51.976 --rc geninfo_all_blocks=1 00:11:51.976 --rc geninfo_unexecuted_blocks=1 00:11:51.976 00:11:51.976 ' 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:51.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.976 --rc genhtml_branch_coverage=1 00:11:51.976 --rc genhtml_function_coverage=1 00:11:51.976 --rc genhtml_legend=1 00:11:51.976 --rc geninfo_all_blocks=1 00:11:51.976 --rc geninfo_unexecuted_blocks=1 00:11:51.976 00:11:51.976 ' 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:51.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.976 --rc genhtml_branch_coverage=1 00:11:51.976 --rc genhtml_function_coverage=1 00:11:51.976 --rc genhtml_legend=1 00:11:51.976 --rc geninfo_all_blocks=1 00:11:51.976 --rc geninfo_unexecuted_blocks=1 00:11:51.976 00:11:51.976 ' 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:51.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.976 --rc genhtml_branch_coverage=1 00:11:51.976 --rc genhtml_function_coverage=1 00:11:51.976 --rc genhtml_legend=1 00:11:51.976 --rc geninfo_all_blocks=1 00:11:51.976 --rc geninfo_unexecuted_blocks=1 00:11:51.976 00:11:51.976 ' 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:11:51.976 12:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.976 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:51.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
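The `[: : integer expression expected` message above is a real (if harmless) shell bug visible in the trace: common.sh line 33 runs `'[' '' -eq 1 ']'`, and `[`'s `-eq` operator requires integer operands, so an empty expansion makes the test error out rather than evaluate false cleanly. A minimal sketch of the failure mode and the usual defaulting guard (the variable name `val` is hypothetical, not SPDK's):

```shell
# Reproduce the failure mode: -eq with an empty operand is an error in [,
# so the test exits with status 2 (here we discard the stderr message).
val=""
if [ "$val" -eq 1 ] 2>/dev/null; then
  echo "enabled"
fi

# Guard (sketch): default an empty/unset value to 0 before the numeric test,
# so [ gets a genuine integer and the comparison is simply false.
if [ "${val:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```

The first test prints nothing (it errors, which the `if` treats as false); the guarded form prints `disabled`.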
NULL_BDEV_SIZE=102400 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:51.977 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.544 12:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:58.544 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:58.544 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:58.545 12:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:58.545 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:58.545 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:58.545 12:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:58.545 Found net devices under 0000:86:00.0: cvl_0_0 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:58.545 12:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:58.545 Found net devices under 0000:86:00.1: cvl_0_1 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:58.545 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:58.545 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:58.545 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:58.545 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:58.545 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:58.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:58.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:11:58.545 00:11:58.546 --- 10.0.0.2 ping statistics --- 00:11:58.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.546 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:58.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:58.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:11:58.546 00:11:58.546 --- 10.0.0.1 ping statistics --- 00:11:58.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:58.546 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1543945 00:11:58.546 12:20:20 
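The `nvmf_tcp_init` trace above (common.sh@265-291) moves the target NIC into a private network namespace, addresses both ends, opens the NVMe/TCP port, and ping-verifies the path. A sketch that only *prints* the equivalent command sequence rather than executing it, since the real thing needs root and the physical `cvl_*` interfaces from this host; names and addresses are taken from the log:

```shell
# Print (do not execute) the netns plumbing traced in the log: target side
# cvl_0_0 lives in namespace cvl_0_0_ns_spdk at 10.0.0.2, initiator side
# cvl_0_1 stays in the root namespace at 10.0.0.1, port 4420 is opened.
print_netns_setup() {
  local tgt=cvl_0_0 ini=cvl_0_1 ns=cvl_0_0_ns_spdk port=4420
  cat <<EOF
ip netns add $ns
ip link set $tgt netns $ns
ip addr add 10.0.0.1/24 dev $ini
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt
ip link set $ini up
ip netns exec $ns ip link set $tgt up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini -p tcp --dport $port -j ACCEPT
EOF
}
print_netns_setup
```

The two pings in the log (root namespace to 10.0.0.2, then `ip netns exec` back to 10.0.0.1) confirm this plumbing before the target is launched inside the namespace.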
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1543945 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1543945 ']' 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.546 [2024-12-10 12:20:20.259023] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:11:58.546 [2024-12-10 12:20:20.259064] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.546 [2024-12-10 12:20:20.340839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:58.546 [2024-12-10 12:20:20.383165] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:58.546 [2024-12-10 12:20:20.383202] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.546 [2024-12-10 12:20:20.383209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:58.546 [2024-12-10 12:20:20.383215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:58.546 [2024-12-10 12:20:20.383221] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:58.546 [2024-12-10 12:20:20.384647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.546 [2024-12-10 12:20:20.384751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.546 [2024-12-10 12:20:20.384766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:58.546 [2024-12-10 12:20:20.384771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.546 [2024-12-10 12:20:20.530184] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.546 Null1 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.546 
12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.546 [2024-12-10 12:20:20.583292] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.546 Null2 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.546 
12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.546 Null3 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:58.546 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.547 Null4 00:11:58.547 
12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.547 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:58.806 00:11:58.806 Discovery Log Number of Records 6, Generation counter 6 00:11:58.806 =====Discovery Log Entry 0====== 00:11:58.806 trtype: tcp 00:11:58.806 adrfam: ipv4 00:11:58.807 subtype: current discovery subsystem 00:11:58.807 treq: not required 00:11:58.807 portid: 0 00:11:58.807 trsvcid: 4420 00:11:58.807 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:58.807 traddr: 10.0.0.2 00:11:58.807 eflags: explicit discovery connections, duplicate discovery information 00:11:58.807 sectype: none 00:11:58.807 =====Discovery Log Entry 1====== 00:11:58.807 trtype: tcp 00:11:58.807 adrfam: ipv4 00:11:58.807 subtype: nvme subsystem 00:11:58.807 treq: not required 00:11:58.807 portid: 0 00:11:58.807 trsvcid: 4420 00:11:58.807 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:58.807 traddr: 10.0.0.2 00:11:58.807 eflags: none 00:11:58.807 sectype: none 00:11:58.807 =====Discovery Log Entry 2====== 00:11:58.807 
trtype: tcp 00:11:58.807 adrfam: ipv4 00:11:58.807 subtype: nvme subsystem 00:11:58.807 treq: not required 00:11:58.807 portid: 0 00:11:58.807 trsvcid: 4420 00:11:58.807 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:58.807 traddr: 10.0.0.2 00:11:58.807 eflags: none 00:11:58.807 sectype: none 00:11:58.807 =====Discovery Log Entry 3====== 00:11:58.807 trtype: tcp 00:11:58.807 adrfam: ipv4 00:11:58.807 subtype: nvme subsystem 00:11:58.807 treq: not required 00:11:58.807 portid: 0 00:11:58.807 trsvcid: 4420 00:11:58.807 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:58.807 traddr: 10.0.0.2 00:11:58.807 eflags: none 00:11:58.807 sectype: none 00:11:58.807 =====Discovery Log Entry 4====== 00:11:58.807 trtype: tcp 00:11:58.807 adrfam: ipv4 00:11:58.807 subtype: nvme subsystem 00:11:58.807 treq: not required 00:11:58.807 portid: 0 00:11:58.807 trsvcid: 4420 00:11:58.807 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:58.807 traddr: 10.0.0.2 00:11:58.807 eflags: none 00:11:58.807 sectype: none 00:11:58.807 =====Discovery Log Entry 5====== 00:11:58.807 trtype: tcp 00:11:58.807 adrfam: ipv4 00:11:58.807 subtype: discovery subsystem referral 00:11:58.807 treq: not required 00:11:58.807 portid: 0 00:11:58.807 trsvcid: 4430 00:11:58.807 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:58.807 traddr: 10.0.0.2 00:11:58.807 eflags: none 00:11:58.807 sectype: none 00:11:58.807 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:58.807 Perform nvmf subsystem discovery via RPC 00:11:58.807 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:58.807 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.807 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.807 [ 00:11:58.807 { 00:11:58.807 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:11:58.807 "subtype": "Discovery", 00:11:58.807 "listen_addresses": [ 00:11:58.807 { 00:11:58.807 "trtype": "TCP", 00:11:58.807 "adrfam": "IPv4", 00:11:58.807 "traddr": "10.0.0.2", 00:11:58.807 "trsvcid": "4420" 00:11:58.807 } 00:11:58.807 ], 00:11:58.807 "allow_any_host": true, 00:11:58.807 "hosts": [] 00:11:58.807 }, 00:11:58.807 { 00:11:58.807 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:58.807 "subtype": "NVMe", 00:11:58.807 "listen_addresses": [ 00:11:58.807 { 00:11:58.807 "trtype": "TCP", 00:11:58.807 "adrfam": "IPv4", 00:11:58.807 "traddr": "10.0.0.2", 00:11:58.807 "trsvcid": "4420" 00:11:58.807 } 00:11:58.807 ], 00:11:58.807 "allow_any_host": true, 00:11:58.807 "hosts": [], 00:11:58.807 "serial_number": "SPDK00000000000001", 00:11:58.807 "model_number": "SPDK bdev Controller", 00:11:58.807 "max_namespaces": 32, 00:11:58.807 "min_cntlid": 1, 00:11:58.807 "max_cntlid": 65519, 00:11:58.807 "namespaces": [ 00:11:58.807 { 00:11:58.807 "nsid": 1, 00:11:58.807 "bdev_name": "Null1", 00:11:58.807 "name": "Null1", 00:11:58.807 "nguid": "0946CA048A0445A6BF1733C8FC477DE1", 00:11:58.807 "uuid": "0946ca04-8a04-45a6-bf17-33c8fc477de1" 00:11:58.807 } 00:11:58.807 ] 00:11:58.807 }, 00:11:58.807 { 00:11:58.807 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:58.807 "subtype": "NVMe", 00:11:58.807 "listen_addresses": [ 00:11:58.807 { 00:11:58.807 "trtype": "TCP", 00:11:58.807 "adrfam": "IPv4", 00:11:58.807 "traddr": "10.0.0.2", 00:11:58.807 "trsvcid": "4420" 00:11:58.807 } 00:11:58.807 ], 00:11:58.807 "allow_any_host": true, 00:11:58.807 "hosts": [], 00:11:58.807 "serial_number": "SPDK00000000000002", 00:11:58.807 "model_number": "SPDK bdev Controller", 00:11:58.807 "max_namespaces": 32, 00:11:58.807 "min_cntlid": 1, 00:11:58.807 "max_cntlid": 65519, 00:11:58.807 "namespaces": [ 00:11:58.807 { 00:11:58.807 "nsid": 1, 00:11:58.807 "bdev_name": "Null2", 00:11:58.807 "name": "Null2", 00:11:58.807 "nguid": "06A979E958FD4A52A11C5EDDFFB7136C", 
00:11:58.807 "uuid": "06a979e9-58fd-4a52-a11c-5eddffb7136c" 00:11:58.807 } 00:11:58.807 ] 00:11:58.807 }, 00:11:58.807 { 00:11:58.807 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:58.807 "subtype": "NVMe", 00:11:58.807 "listen_addresses": [ 00:11:58.807 { 00:11:58.807 "trtype": "TCP", 00:11:58.807 "adrfam": "IPv4", 00:11:58.807 "traddr": "10.0.0.2", 00:11:58.807 "trsvcid": "4420" 00:11:58.807 } 00:11:58.807 ], 00:11:58.807 "allow_any_host": true, 00:11:58.807 "hosts": [], 00:11:58.807 "serial_number": "SPDK00000000000003", 00:11:58.807 "model_number": "SPDK bdev Controller", 00:11:58.807 "max_namespaces": 32, 00:11:58.807 "min_cntlid": 1, 00:11:58.807 "max_cntlid": 65519, 00:11:58.807 "namespaces": [ 00:11:58.807 { 00:11:58.807 "nsid": 1, 00:11:58.807 "bdev_name": "Null3", 00:11:58.807 "name": "Null3", 00:11:58.807 "nguid": "3F7F0D9F3FB2481B9654462EC6A50D64", 00:11:58.807 "uuid": "3f7f0d9f-3fb2-481b-9654-462ec6a50d64" 00:11:58.807 } 00:11:58.807 ] 00:11:58.807 }, 00:11:58.807 { 00:11:58.807 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:58.807 "subtype": "NVMe", 00:11:58.807 "listen_addresses": [ 00:11:58.807 { 00:11:58.807 "trtype": "TCP", 00:11:58.807 "adrfam": "IPv4", 00:11:58.807 "traddr": "10.0.0.2", 00:11:58.807 "trsvcid": "4420" 00:11:58.807 } 00:11:58.807 ], 00:11:58.807 "allow_any_host": true, 00:11:58.807 "hosts": [], 00:11:58.807 "serial_number": "SPDK00000000000004", 00:11:58.807 "model_number": "SPDK bdev Controller", 00:11:58.807 "max_namespaces": 32, 00:11:58.807 "min_cntlid": 1, 00:11:58.807 "max_cntlid": 65519, 00:11:58.807 "namespaces": [ 00:11:58.807 { 00:11:58.807 "nsid": 1, 00:11:58.807 "bdev_name": "Null4", 00:11:58.807 "name": "Null4", 00:11:58.807 "nguid": "66167803D0A14119B7DBA2B21CDCB5E8", 00:11:58.807 "uuid": "66167803-d0a1-4119-b7db-a2b21cdcb5e8" 00:11:58.807 } 00:11:58.807 ] 00:11:58.807 } 00:11:58.807 ] 00:11:58.807 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.807 
12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:58.807 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:58.807 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.807 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.807 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.807 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.807 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:58.807 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.807 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:58.807 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.807 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:58.807 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:58.807 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.807 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.066 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.066 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:11:59.066 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.066 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.066 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.066 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:59.066 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:59.066 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.066 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.066 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.066 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:59.066 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.066 12:20:20 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.066 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.066 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:59.066 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:59.066 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.066 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:59.066 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.066 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:59.066 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.066 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.066 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.066 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:59.066 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.066 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.066 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.066 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:59.066 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:59.066 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.066 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:59.066 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.066 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:59.066 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:11:59.066 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:59.066 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:59.067 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:59.067 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:59.067 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:59.067 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:59.067 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:59.067 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:59.067 rmmod nvme_tcp 00:11:59.067 rmmod nvme_fabrics 00:11:59.067 rmmod nvme_keyring 00:11:59.067 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:59.067 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:59.067 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:59.067 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1543945 ']' 00:11:59.067 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1543945 00:11:59.067 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1543945 ']' 00:11:59.067 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1543945 00:11:59.067 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:11:59.067 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.067 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1543945 00:11:59.067 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.067 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.067 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1543945' 00:11:59.067 killing process with pid 1543945 00:11:59.067 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1543945 00:11:59.067 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1543945 00:11:59.326 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:59.326 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:59.326 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:59.326 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:59.326 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:59.326 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:59.326 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:59.326 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:59.326 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:59.326 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.326 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.326 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:01.861 00:12:01.861 real 0m9.515s 00:12:01.861 user 0m5.835s 00:12:01.861 sys 0m4.829s 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:01.861 ************************************ 00:12:01.861 END TEST nvmf_target_discovery 00:12:01.861 ************************************ 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:01.861 ************************************ 00:12:01.861 START TEST nvmf_referrals 00:12:01.861 ************************************ 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:01.861 * Looking for test storage... 
00:12:01.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:01.861 12:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:01.861 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:01.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.862 
--rc genhtml_branch_coverage=1 00:12:01.862 --rc genhtml_function_coverage=1 00:12:01.862 --rc genhtml_legend=1 00:12:01.862 --rc geninfo_all_blocks=1 00:12:01.862 --rc geninfo_unexecuted_blocks=1 00:12:01.862 00:12:01.862 ' 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:01.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.862 --rc genhtml_branch_coverage=1 00:12:01.862 --rc genhtml_function_coverage=1 00:12:01.862 --rc genhtml_legend=1 00:12:01.862 --rc geninfo_all_blocks=1 00:12:01.862 --rc geninfo_unexecuted_blocks=1 00:12:01.862 00:12:01.862 ' 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:01.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.862 --rc genhtml_branch_coverage=1 00:12:01.862 --rc genhtml_function_coverage=1 00:12:01.862 --rc genhtml_legend=1 00:12:01.862 --rc geninfo_all_blocks=1 00:12:01.862 --rc geninfo_unexecuted_blocks=1 00:12:01.862 00:12:01.862 ' 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:01.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.862 --rc genhtml_branch_coverage=1 00:12:01.862 --rc genhtml_function_coverage=1 00:12:01.862 --rc genhtml_legend=1 00:12:01.862 --rc geninfo_all_blocks=1 00:12:01.862 --rc geninfo_unexecuted_blocks=1 00:12:01.862 00:12:01.862 ' 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s 
extglob 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.862 12:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:01.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:01.862 12:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:01.862 12:20:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:08.427 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:08.427 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:08.427 Found net devices under 0000:86:00.0: cvl_0_0 00:12:08.427 12:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:08.427 Found net devices under 0000:86:00.1: cvl_0_1 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.427 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:08.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:12:08.428 00:12:08.428 --- 10.0.0.2 ping statistics --- 00:12:08.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.428 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:08.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:12:08.428 00:12:08.428 --- 10.0.0.1 ping statistics --- 00:12:08.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.428 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1547605 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1547605 00:12:08.428 
12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1547605 ']' 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.428 [2024-12-10 12:20:29.639390] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:12:08.428 [2024-12-10 12:20:29.639434] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.428 [2024-12-10 12:20:29.717934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:08.428 [2024-12-10 12:20:29.758217] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.428 [2024-12-10 12:20:29.758255] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:08.428 [2024-12-10 12:20:29.758262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.428 [2024-12-10 12:20:29.758268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.428 [2024-12-10 12:20:29.758273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:08.428 [2024-12-10 12:20:29.759848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.428 [2024-12-10 12:20:29.759955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.428 [2024-12-10 12:20:29.760064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.428 [2024-12-10 12:20:29.760065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.428 [2024-12-10 12:20:29.910174] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.428 [2024-12-10 12:20:29.936350] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:08.428 12:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.428 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.428 12:20:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.428 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.429 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:08.687 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:08.687 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:08.687 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:08.687 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:08.687 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:08.687 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:08.687 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:08.687 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:08.687 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:08.687 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:08.687 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:08.687 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:08.687 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:08.687 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:08.945 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:08.945 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:08.945 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:08.945 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:08.945 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:08.945 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 
10.0.0.2 -s 8009 -o json 00:12:08.945 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:08.946 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:08.946 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.946 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:08.946 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.946 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:08.946 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:08.946 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:08.946 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:08.946 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.946 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:08.946 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.204 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.204 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:09.204 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:09.204 12:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:09.204 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:09.204 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:09.204 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:09.204 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:09.204 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:09.204 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:09.204 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:09.204 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:09.204 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:09.204 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:09.204 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:09.204 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:09.462 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ 
'' == '' ]] 00:12:09.462 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:09.462 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:09.462 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:09.462 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:09.462 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:09.720 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:09.720 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:09.720 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.720 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.720 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.720 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:09.720 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:09.720 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.720 12:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.720 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.720 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:09.720 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:09.720 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:09.720 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:09.720 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:09.720 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:09.720 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:09.979 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:09.979 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:09.979 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:09.979 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:09.979 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:09.979 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:09.979 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:09.979 12:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:09.979 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:09.979 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:09.979 rmmod nvme_tcp 00:12:09.979 rmmod nvme_fabrics 00:12:09.979 rmmod nvme_keyring 00:12:09.979 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:09.979 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:09.979 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:09.979 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1547605 ']' 00:12:09.979 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1547605 00:12:09.979 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1547605 ']' 00:12:09.979 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1547605 00:12:09.979 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:09.979 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:09.979 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1547605 00:12:09.979 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:09.979 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:09.979 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1547605' 00:12:09.979 killing process with pid 1547605 00:12:09.979 12:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1547605 00:12:09.979 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1547605 00:12:10.238 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:10.238 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:10.238 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:10.238 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:10.238 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:10.238 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:10.238 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:10.238 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:10.238 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:10.238 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.238 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.238 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.142 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:12.401 00:12:12.401 real 0m10.813s 00:12:12.401 user 0m12.475s 00:12:12.401 sys 0m5.125s 00:12:12.401 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:12.401 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:12.401 ************************************ 00:12:12.401 END TEST nvmf_referrals 00:12:12.401 ************************************ 00:12:12.401 12:20:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:12.401 12:20:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:12.401 12:20:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.401 12:20:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:12.401 ************************************ 00:12:12.401 START TEST nvmf_connect_disconnect 00:12:12.401 ************************************ 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:12.402 * Looking for test storage... 
00:12:12.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:12.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.402 --rc genhtml_branch_coverage=1 00:12:12.402 --rc genhtml_function_coverage=1 00:12:12.402 --rc genhtml_legend=1 00:12:12.402 --rc geninfo_all_blocks=1 00:12:12.402 --rc geninfo_unexecuted_blocks=1 00:12:12.402 00:12:12.402 ' 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:12.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.402 --rc genhtml_branch_coverage=1 00:12:12.402 --rc genhtml_function_coverage=1 00:12:12.402 --rc genhtml_legend=1 00:12:12.402 --rc geninfo_all_blocks=1 00:12:12.402 --rc geninfo_unexecuted_blocks=1 00:12:12.402 00:12:12.402 ' 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:12.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.402 --rc genhtml_branch_coverage=1 00:12:12.402 --rc genhtml_function_coverage=1 00:12:12.402 --rc genhtml_legend=1 00:12:12.402 --rc geninfo_all_blocks=1 00:12:12.402 --rc geninfo_unexecuted_blocks=1 00:12:12.402 00:12:12.402 ' 00:12:12.402 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:12.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:12.402 --rc genhtml_branch_coverage=1 00:12:12.402 --rc genhtml_function_coverage=1 00:12:12.402 --rc genhtml_legend=1 00:12:12.402 --rc geninfo_all_blocks=1 00:12:12.402 --rc geninfo_unexecuted_blocks=1 00:12:12.402 00:12:12.402 ' 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:12.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:12.662 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.230 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:19.230 12:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:19.230 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:19.230 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:19.230 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:19.230 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:19.230 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:19.230 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:19.230 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:19.230 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:19.230 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:19.230 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:19.230 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:19.230 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:19.230 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:19.230 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:19.230 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:19.231 12:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:19.231 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:19.231 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:19.231 12:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:19.231 Found net devices under 0000:86:00.0: cvl_0_0 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:19.231 12:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:19.231 Found net devices under 0000:86:00.1: cvl_0_1 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:19.231 12:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:19.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:12:19.231 00:12:19.231 --- 10.0.0.2 ping statistics --- 00:12:19.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.231 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:19.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:19.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:12:19.231 00:12:19.231 --- 10.0.0.1 ping statistics --- 00:12:19.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.231 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:19.231 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=1551689 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1551689 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1551689 ']' 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.232 [2024-12-10 12:20:40.597849] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:12:19.232 [2024-12-10 12:20:40.597896] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.232 [2024-12-10 12:20:40.680714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:19.232 [2024-12-10 12:20:40.721340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:19.232 [2024-12-10 12:20:40.721382] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.232 [2024-12-10 12:20:40.721391] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:19.232 [2024-12-10 12:20:40.721397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:19.232 [2024-12-10 12:20:40.721406] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:19.232 [2024-12-10 12:20:40.722899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.232 [2024-12-10 12:20:40.723005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.232 [2024-12-10 12:20:40.723114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.232 [2024-12-10 12:20:40.723115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:19.232 12:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.232 [2024-12-10 12:20:40.868971] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.232 12:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:19.232 [2024-12-10 12:20:40.930202] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:19.232 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:22.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.731 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:35.731 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:35.731 12:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:35.731 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:35.731 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:35.731 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:35.731 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:35.731 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:35.731 rmmod nvme_tcp 00:12:35.731 rmmod nvme_fabrics 00:12:35.731 rmmod nvme_keyring 00:12:35.731 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:35.731 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:35.731 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:35.731 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1551689 ']' 00:12:35.731 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1551689 00:12:35.731 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1551689 ']' 00:12:35.731 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1551689 00:12:35.731 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:12:35.731 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.731 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1551689 
00:12:35.731 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:35.731 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:35.732 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1551689' 00:12:35.732 killing process with pid 1551689 00:12:35.732 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1551689 00:12:35.732 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1551689 00:12:35.732 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:35.732 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:35.732 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:35.732 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:35.732 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:35.732 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:35.732 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:35.732 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:35.732 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:35.732 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.732 12:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.732 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.636 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:37.636 00:12:37.636 real 0m25.287s 00:12:37.636 user 1m8.682s 00:12:37.636 sys 0m5.862s 00:12:37.636 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.636 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:37.636 ************************************ 00:12:37.636 END TEST nvmf_connect_disconnect 00:12:37.636 ************************************ 00:12:37.636 12:20:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:37.636 12:20:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:37.636 12:20:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.636 12:20:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:37.636 ************************************ 00:12:37.636 START TEST nvmf_multitarget 00:12:37.636 ************************************ 00:12:37.636 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:37.896 * Looking for test storage... 
00:12:37.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- 
# : 1 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:37.896 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.896 --rc genhtml_branch_coverage=1 00:12:37.896 --rc genhtml_function_coverage=1 00:12:37.896 --rc genhtml_legend=1 00:12:37.896 --rc geninfo_all_blocks=1 00:12:37.896 --rc geninfo_unexecuted_blocks=1 00:12:37.896 00:12:37.896 ' 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:37.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.896 --rc genhtml_branch_coverage=1 00:12:37.896 --rc genhtml_function_coverage=1 00:12:37.896 --rc genhtml_legend=1 00:12:37.896 --rc geninfo_all_blocks=1 00:12:37.896 --rc geninfo_unexecuted_blocks=1 00:12:37.896 00:12:37.896 ' 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:37.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.896 --rc genhtml_branch_coverage=1 00:12:37.896 --rc genhtml_function_coverage=1 00:12:37.896 --rc genhtml_legend=1 00:12:37.896 --rc geninfo_all_blocks=1 00:12:37.896 --rc geninfo_unexecuted_blocks=1 00:12:37.896 00:12:37.896 ' 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:37.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.896 --rc genhtml_branch_coverage=1 00:12:37.896 --rc genhtml_function_coverage=1 00:12:37.896 --rc genhtml_legend=1 00:12:37.896 --rc geninfo_all_blocks=1 00:12:37.896 --rc geninfo_unexecuted_blocks=1 00:12:37.896 00:12:37.896 ' 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.896 12:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.896 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:37.897 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.897 12:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:37.897 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:44.464 12:21:05 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.464 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:44.465 12:21:05 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:44.465 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:44.465 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.465 12:21:05 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:44.465 Found net devices under 0000:86:00.0: cvl_0_0 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.465 
12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:44.465 Found net devices under 0000:86:00.1: cvl_0_1 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:44.465 12:21:05 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:44.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:44.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:12:44.465 00:12:44.465 --- 10.0.0.2 ping statistics --- 00:12:44.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.465 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:44.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:44.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:12:44.465 00:12:44.465 --- 10.0.0.1 ping statistics --- 00:12:44.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.465 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1558507 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 1558507 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1558507 ']' 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.465 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:44.465 [2024-12-10 12:21:05.994264] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:12:44.465 [2024-12-10 12:21:05.994316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.465 [2024-12-10 12:21:06.075366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:44.465 [2024-12-10 12:21:06.118222] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.466 [2024-12-10 12:21:06.118259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:44.466 [2024-12-10 12:21:06.118266] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.466 [2024-12-10 12:21:06.118272] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.466 [2024-12-10 12:21:06.118277] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:44.466 [2024-12-10 12:21:06.119723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.466 [2024-12-10 12:21:06.123176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.466 [2024-12-10 12:21:06.123206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.466 [2024-12-10 12:21:06.123206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.724 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:44.724 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:44.724 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:44.724 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:44.724 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:44.724 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.724 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:44.724 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:44.724 12:21:06 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:44.982 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:44.982 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:44.982 "nvmf_tgt_1" 00:12:44.982 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:45.240 "nvmf_tgt_2" 00:12:45.240 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:45.240 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:45.240 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:45.240 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:45.499 true 00:12:45.499 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:45.499 true 00:12:45.499 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:45.499 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:45.499 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:45.499 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:45.499 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:45.499 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:45.499 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:45.499 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:45.499 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:45.499 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:45.499 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:45.499 rmmod nvme_tcp 00:12:45.757 rmmod nvme_fabrics 00:12:45.757 rmmod nvme_keyring 00:12:45.757 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:45.757 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:45.757 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:45.757 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1558507 ']' 00:12:45.757 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1558507 00:12:45.757 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1558507 ']' 00:12:45.757 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1558507 00:12:45.757 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:45.757 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.757 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1558507 00:12:45.757 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:45.757 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:45.757 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1558507' 00:12:45.757 killing process with pid 1558507 00:12:45.757 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1558507 00:12:45.757 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1558507 00:12:46.015 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:46.015 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:46.015 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:46.015 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:46.015 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:46.015 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:46.015 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:46.015 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:46.015 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:46.015 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.015 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.015 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.919 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:47.919 00:12:47.919 real 0m10.260s 00:12:47.919 user 0m9.874s 00:12:47.919 sys 0m4.973s 00:12:47.919 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.919 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:47.919 ************************************ 00:12:47.919 END TEST nvmf_multitarget 00:12:47.919 ************************************ 00:12:47.919 12:21:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:47.919 12:21:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:47.919 12:21:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.919 12:21:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:47.919 ************************************ 00:12:47.919 START TEST nvmf_rpc 00:12:47.919 ************************************ 00:12:47.919 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:48.179 * Looking for test storage... 
00:12:48.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:48.179 12:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:48.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.179 --rc genhtml_branch_coverage=1 00:12:48.179 --rc genhtml_function_coverage=1 00:12:48.179 --rc genhtml_legend=1 00:12:48.179 --rc geninfo_all_blocks=1 00:12:48.179 --rc geninfo_unexecuted_blocks=1 
00:12:48.179 00:12:48.179 ' 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:48.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.179 --rc genhtml_branch_coverage=1 00:12:48.179 --rc genhtml_function_coverage=1 00:12:48.179 --rc genhtml_legend=1 00:12:48.179 --rc geninfo_all_blocks=1 00:12:48.179 --rc geninfo_unexecuted_blocks=1 00:12:48.179 00:12:48.179 ' 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:48.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.179 --rc genhtml_branch_coverage=1 00:12:48.179 --rc genhtml_function_coverage=1 00:12:48.179 --rc genhtml_legend=1 00:12:48.179 --rc geninfo_all_blocks=1 00:12:48.179 --rc geninfo_unexecuted_blocks=1 00:12:48.179 00:12:48.179 ' 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:48.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:48.179 --rc genhtml_branch_coverage=1 00:12:48.179 --rc genhtml_function_coverage=1 00:12:48.179 --rc genhtml_legend=1 00:12:48.179 --rc geninfo_all_blocks=1 00:12:48.179 --rc geninfo_unexecuted_blocks=1 00:12:48.179 00:12:48.179 ' 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.179 12:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.179 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:48.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:48.180 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:48.180 12:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.752 
12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:12:54.752 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:54.752 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:54.752 Found net devices under 0000:86:00.0: cvl_0_0 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:54.752 Found net devices under 0000:86:00.1: cvl_0_1 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.752 12:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:54.752 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.753 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.753 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.753 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:54.753 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.753 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.753 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:54.753 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:54.753 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.753 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.753 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:54.753 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:54.753 
12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.753 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:54.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:54.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:12:54.753 00:12:54.753 --- 10.0.0.2 ping statistics --- 00:12:54.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.753 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.753 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:54.753 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:12:54.753 00:12:54.753 --- 10.0.0.1 ping statistics --- 00:12:54.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.753 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1562398 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1562398 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1562398 ']' 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.753 [2024-12-10 12:21:16.353246] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:12:54.753 [2024-12-10 12:21:16.353299] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.753 [2024-12-10 12:21:16.431903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.753 [2024-12-10 12:21:16.474362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.753 [2024-12-10 12:21:16.474398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:54.753 [2024-12-10 12:21:16.474405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.753 [2024-12-10 12:21:16.474411] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.753 [2024-12-10 12:21:16.474416] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.753 [2024-12-10 12:21:16.477175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.753 [2024-12-10 12:21:16.477203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.753 [2024-12-10 12:21:16.477312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.753 [2024-12-10 12:21:16.477312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.753 12:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:54.753 "tick_rate": 2300000000, 00:12:54.753 "poll_groups": [ 00:12:54.753 { 00:12:54.753 "name": "nvmf_tgt_poll_group_000", 00:12:54.753 "admin_qpairs": 0, 00:12:54.753 "io_qpairs": 0, 00:12:54.753 "current_admin_qpairs": 0, 00:12:54.753 "current_io_qpairs": 0, 00:12:54.753 "pending_bdev_io": 0, 00:12:54.753 "completed_nvme_io": 0, 00:12:54.753 "transports": [] 00:12:54.753 }, 00:12:54.753 { 00:12:54.753 "name": "nvmf_tgt_poll_group_001", 00:12:54.753 "admin_qpairs": 0, 00:12:54.753 "io_qpairs": 0, 00:12:54.753 "current_admin_qpairs": 0, 00:12:54.753 "current_io_qpairs": 0, 00:12:54.753 "pending_bdev_io": 0, 00:12:54.753 "completed_nvme_io": 0, 00:12:54.753 "transports": [] 00:12:54.753 }, 00:12:54.753 { 00:12:54.753 "name": "nvmf_tgt_poll_group_002", 00:12:54.753 "admin_qpairs": 0, 00:12:54.753 "io_qpairs": 0, 00:12:54.753 "current_admin_qpairs": 0, 00:12:54.753 "current_io_qpairs": 0, 00:12:54.753 "pending_bdev_io": 0, 00:12:54.753 "completed_nvme_io": 0, 00:12:54.753 "transports": [] 00:12:54.753 }, 00:12:54.753 { 00:12:54.753 "name": "nvmf_tgt_poll_group_003", 00:12:54.753 "admin_qpairs": 0, 00:12:54.753 "io_qpairs": 0, 00:12:54.753 "current_admin_qpairs": 0, 00:12:54.753 "current_io_qpairs": 0, 00:12:54.753 "pending_bdev_io": 0, 00:12:54.753 "completed_nvme_io": 0, 00:12:54.753 "transports": [] 00:12:54.753 } 00:12:54.753 ] 00:12:54.753 }' 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:54.753 12:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.753 [2024-12-10 12:21:16.732021] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.753 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:54.753 "tick_rate": 2300000000, 00:12:54.753 "poll_groups": [ 00:12:54.753 { 00:12:54.753 "name": "nvmf_tgt_poll_group_000", 00:12:54.753 "admin_qpairs": 0, 00:12:54.753 "io_qpairs": 0, 00:12:54.753 "current_admin_qpairs": 0, 00:12:54.753 "current_io_qpairs": 0, 00:12:54.754 "pending_bdev_io": 0, 00:12:54.754 "completed_nvme_io": 0, 00:12:54.754 "transports": [ 00:12:54.754 { 00:12:54.754 "trtype": "TCP" 00:12:54.754 } 00:12:54.754 ] 00:12:54.754 }, 00:12:54.754 { 00:12:54.754 "name": "nvmf_tgt_poll_group_001", 00:12:54.754 "admin_qpairs": 0, 00:12:54.754 "io_qpairs": 0, 00:12:54.754 "current_admin_qpairs": 0, 00:12:54.754 "current_io_qpairs": 0, 00:12:54.754 "pending_bdev_io": 0, 00:12:54.754 
"completed_nvme_io": 0, 00:12:54.754 "transports": [ 00:12:54.754 { 00:12:54.754 "trtype": "TCP" 00:12:54.754 } 00:12:54.754 ] 00:12:54.754 }, 00:12:54.754 { 00:12:54.754 "name": "nvmf_tgt_poll_group_002", 00:12:54.754 "admin_qpairs": 0, 00:12:54.754 "io_qpairs": 0, 00:12:54.754 "current_admin_qpairs": 0, 00:12:54.754 "current_io_qpairs": 0, 00:12:54.754 "pending_bdev_io": 0, 00:12:54.754 "completed_nvme_io": 0, 00:12:54.754 "transports": [ 00:12:54.754 { 00:12:54.754 "trtype": "TCP" 00:12:54.754 } 00:12:54.754 ] 00:12:54.754 }, 00:12:54.754 { 00:12:54.754 "name": "nvmf_tgt_poll_group_003", 00:12:54.754 "admin_qpairs": 0, 00:12:54.754 "io_qpairs": 0, 00:12:54.754 "current_admin_qpairs": 0, 00:12:54.754 "current_io_qpairs": 0, 00:12:54.754 "pending_bdev_io": 0, 00:12:54.754 "completed_nvme_io": 0, 00:12:54.754 "transports": [ 00:12:54.754 { 00:12:54.754 "trtype": "TCP" 00:12:54.754 } 00:12:54.754 ] 00:12:54.754 } 00:12:54.754 ] 00:12:54.754 }' 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:54.754 
12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.754 Malloc1 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:54.754 12:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.754 [2024-12-10 12:21:16.909294] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:54.754 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:55.013 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:12:55.013 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:55.013 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.013 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:55.013 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.013 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:55.013 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:55.013 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:55.013 [2024-12-10 12:21:16.937808] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:55.013 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:55.013 could not add new controller: failed to write to nvme-fabrics device 00:12:55.013 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:55.013 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:55.013 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:55.013 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:55.013 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:55.013 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.013 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.013 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.013 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.390 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.390 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:56.390 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.390 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:56.390 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:58.293 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:58.293 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:58.293 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.293 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:58.293 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.293 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:58.294 12:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.294 [2024-12-10 12:21:20.310081] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:58.294 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:58.294 could not add new controller: failed to write to nvme-fabrics device 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:58.294 
12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.294 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.671 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.671 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:59.671 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.671 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:59.671 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:01.574 12:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.574 [2024-12-10 12:21:23.619794] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.574 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:02.949 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:02.949 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:02.949 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.949 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:02.949 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.850 
12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.850 [2024-12-10 12:21:26.929883] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.850 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.226 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.226 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:06.226 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.226 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:06.226 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.127 12:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.127 [2024-12-10 12:21:30.229191] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.127 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:09.509 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:09.509 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:09.509 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:09.509 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:09.509 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:11.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
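The `waitforserial` / `waitforserial_disconnect` helpers traced above (autotest_common.sh@1202-1235) poll `lsblk -l -o NAME,SERIAL` until a device with the expected serial appears. A minimal sketch of that polling loop, with `lsblk` stubbed out for illustration (the real helper reads the host's block devices and sleeps between attempts):

```shell
#!/usr/bin/env bash
# Stubbed lsblk for illustration only; the real helper inspects actual block devices.
lsblk() { printf 'NAME    SERIAL\nnvme0n1 SPDKISFASTANDAWESOME\n'; }

# Poll until exactly one device with the given serial shows up (sketch of
# waitforserial; the traced helper sleeps 2s up front and retries up to 16 times).
waitforserial() {
    local serial=$1 nvme_device_counter=1 nvme_devices=0 i=0
    while ((i++ <= 15)); do
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        ((nvme_devices == nvme_device_counter)) && return 0
        sleep 2
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME && echo connected
```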
00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.411 [2024-12-10 12:21:33.532402] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.411 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.787 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:12.787 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:12.787 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:13:12.787 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:12.787 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:14.689 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:14.689 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:14.689 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.689 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:14.689 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.689 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:14.689 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.947 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:14.947 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:14.947 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:14.947 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.948 [2024-12-10 12:21:36.934511] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.948 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:16.323 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:16.323 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:16.323 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:16.324 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:16.324 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:18.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.226 [2024-12-10 12:21:40.247661] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.226 [2024-12-10 12:21:40.295762] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.226 
12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:13:18.226 [2024-12-10 12:21:40.343917] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.226 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.227 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.227 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.227 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.227 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.227 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.227 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.227 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:18.227 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.227 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.227 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.227 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.227 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.227 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.227 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.227 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.227 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.227 [2024-12-10 12:21:40.392089] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
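Each iteration of the rpc.sh@99-107 loop traced above issues the same RPC sequence: create the subsystem, add a TCP listener, attach the Malloc1 namespace, allow any host, then tear it back down. A sketch of one iteration, with `rpc_cmd` stubbed to echo its arguments (in the test itself `rpc_cmd` forwards to SPDK's `scripts/rpc.py`):

```shell
#!/usr/bin/env bash
# Stub: print the RPC instead of invoking scripts/rpc.py, for illustration.
rpc_cmd() { echo "rpc_cmd $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

rpc_cmd nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME   # create subsystem
rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_ns "$NQN" Malloc1                   # attach namespace
rpc_cmd nvmf_subsystem_allow_any_host "$NQN"
rpc_cmd nvmf_subsystem_remove_ns "$NQN" 1                      # teardown
rpc_cmd nvmf_delete_subsystem "$NQN"
```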
00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.486 [2024-12-10 12:21:40.440283] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.486 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:18.487 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.487 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.487 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.487 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:18.487 "tick_rate": 2300000000, 00:13:18.487 "poll_groups": [ 00:13:18.487 { 00:13:18.487 "name": "nvmf_tgt_poll_group_000", 00:13:18.487 "admin_qpairs": 2, 00:13:18.487 "io_qpairs": 168, 00:13:18.487 "current_admin_qpairs": 0, 00:13:18.487 "current_io_qpairs": 0, 00:13:18.487 "pending_bdev_io": 0, 00:13:18.487 "completed_nvme_io": 268, 00:13:18.487 "transports": [ 00:13:18.487 { 00:13:18.487 "trtype": "TCP" 00:13:18.487 } 00:13:18.487 ] 00:13:18.487 }, 00:13:18.487 { 00:13:18.487 "name": "nvmf_tgt_poll_group_001", 00:13:18.487 "admin_qpairs": 2, 00:13:18.487 "io_qpairs": 168, 00:13:18.487 "current_admin_qpairs": 0, 00:13:18.487 "current_io_qpairs": 0, 00:13:18.487 "pending_bdev_io": 0, 00:13:18.487 "completed_nvme_io": 268, 00:13:18.487 "transports": [ 00:13:18.487 { 00:13:18.487 "trtype": "TCP" 00:13:18.487 } 00:13:18.487 ] 00:13:18.487 }, 00:13:18.487 { 00:13:18.487 "name": "nvmf_tgt_poll_group_002", 00:13:18.487 "admin_qpairs": 1, 00:13:18.487 "io_qpairs": 168, 00:13:18.487 "current_admin_qpairs": 0, 00:13:18.487 "current_io_qpairs": 0, 00:13:18.487 "pending_bdev_io": 0, 
00:13:18.487 "completed_nvme_io": 268, 00:13:18.487 "transports": [ 00:13:18.487 { 00:13:18.487 "trtype": "TCP" 00:13:18.487 } 00:13:18.487 ] 00:13:18.487 }, 00:13:18.487 { 00:13:18.487 "name": "nvmf_tgt_poll_group_003", 00:13:18.487 "admin_qpairs": 2, 00:13:18.487 "io_qpairs": 168, 00:13:18.487 "current_admin_qpairs": 0, 00:13:18.487 "current_io_qpairs": 0, 00:13:18.487 "pending_bdev_io": 0, 00:13:18.487 "completed_nvme_io": 218, 00:13:18.487 "transports": [ 00:13:18.487 { 00:13:18.487 "trtype": "TCP" 00:13:18.487 } 00:13:18.487 ] 00:13:18.487 } 00:13:18.487 ] 00:13:18.487 }' 00:13:18.487 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:18.487 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:18.487 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:18.487 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:18.487 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:18.487 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:18.487 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:18.487 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:18.487 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:18.487 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:13:18.487 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:18.487 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:18.487 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:13:18.487 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:18.487 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:18.487 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:18.487 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:18.487 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:18.487 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:18.487 rmmod nvme_tcp 00:13:18.487 rmmod nvme_fabrics 00:13:18.487 rmmod nvme_keyring 00:13:18.746 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:18.746 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:18.746 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:18.746 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1562398 ']' 00:13:18.746 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1562398 00:13:18.746 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1562398 ']' 00:13:18.746 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1562398 00:13:18.746 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:18.746 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:18.746 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1562398 00:13:18.746 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:18.746 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:18.746 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1562398' 00:13:18.746 killing process with pid 1562398 00:13:18.746 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1562398 00:13:18.746 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1562398 00:13:18.746 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:18.746 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:18.746 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:18.746 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:18.746 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:18.747 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:18.747 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:18.747 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:18.747 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:18.747 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.747 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.747 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.283 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:21.283 00:13:21.283 real 0m32.889s 00:13:21.283 user 1m39.119s 00:13:21.283 sys 0m6.406s 00:13:21.283 12:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.283 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.283 ************************************ 00:13:21.283 END TEST nvmf_rpc 00:13:21.283 ************************************ 00:13:21.283 12:21:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:21.283 12:21:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:21.283 12:21:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.283 12:21:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:21.283 ************************************ 00:13:21.283 START TEST nvmf_invalid 00:13:21.283 ************************************ 00:13:21.283 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:21.283 * Looking for test storage... 
00:13:21.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:21.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.284 --rc genhtml_branch_coverage=1 00:13:21.284 --rc 
genhtml_function_coverage=1 00:13:21.284 --rc genhtml_legend=1 00:13:21.284 --rc geninfo_all_blocks=1 00:13:21.284 --rc geninfo_unexecuted_blocks=1 00:13:21.284 00:13:21.284 ' 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:21.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.284 --rc genhtml_branch_coverage=1 00:13:21.284 --rc genhtml_function_coverage=1 00:13:21.284 --rc genhtml_legend=1 00:13:21.284 --rc geninfo_all_blocks=1 00:13:21.284 --rc geninfo_unexecuted_blocks=1 00:13:21.284 00:13:21.284 ' 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:21.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.284 --rc genhtml_branch_coverage=1 00:13:21.284 --rc genhtml_function_coverage=1 00:13:21.284 --rc genhtml_legend=1 00:13:21.284 --rc geninfo_all_blocks=1 00:13:21.284 --rc geninfo_unexecuted_blocks=1 00:13:21.284 00:13:21.284 ' 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:21.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:21.284 --rc genhtml_branch_coverage=1 00:13:21.284 --rc genhtml_function_coverage=1 00:13:21.284 --rc genhtml_legend=1 00:13:21.284 --rc geninfo_all_blocks=1 00:13:21.284 --rc geninfo_unexecuted_blocks=1 00:13:21.284 00:13:21.284 ' 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # 
[[ -e /bin/wpdk_common.sh ]] 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.284 12:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:21.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:13:21.284 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:21.285 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:21.285 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:21.285 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:21.285 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:21.285 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:21.285 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:21.285 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:21.285 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:21.285 12:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.285 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.285 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.285 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:21.285 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:21.285 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:21.285 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:27.855 12:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:27.855 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:27.856 12:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:27.856 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:27.856 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:27.856 Found net devices under 0000:86:00.0: cvl_0_0 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:27.856 Found net devices under 0000:86:00.1: cvl_0_1 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:27.856 12:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:27.856 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:27.856 12:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:27.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:27.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:13:27.856 00:13:27.856 --- 10.0.0.2 ping statistics --- 00:13:27.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.856 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:27.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:27.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:13:27.856 00:13:27.856 --- 10.0.0.1 ping statistics --- 00:13:27.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:27.856 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:27.856 12:21:49 
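The `ipts` call above (nvmf/common.sh@287, expanded at @790) is a wrapper that tags every rule it inserts with an `SPDK_NVMF:` comment, so teardown can later find and delete exactly the rules the test added. A minimal re-creation of that idea; the `iptables` stub is only here so the sketch runs without root and echoes the final rule instead of installing it:

```shell
# Stub standing in for the real binary, for illustration only:
iptables() { printf '%s\n' "$*"; }
# The wrapper: forward the rule, append a comment recording the exact args.
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

rule=$(ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT)
printf '%s\n' "$rule"
```

Because the comment holds the original arguments verbatim, cleanup can list rules, grep for the `SPDK_NVMF:` prefix, and replay each with `-D` instead of flushing chains it does not own.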
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1570161 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1570161 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1570161 ']' 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
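`nvmfappstart` above launches `nvmf_tgt` inside the namespace and then `waitforlisten` blocks until the app's JSON-RPC unix socket appears. A sketch of that polling loop: the socket path and `max_retries=100` come from the trace, but the loop body and the 0.1 s interval are assumptions, not the harness's actual implementation:

```shell
# Poll until $rpc_addr exists as a unix socket, or give up if the app dies
# or the retry budget runs out.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        [ -S "$rpc_addr" ] && return 0            # socket is up, app is listening
        kill -0 "$pid" 2>/dev/null || return 1    # app exited before listening
        sleep 0.1
    done
    return 1                                      # timed out
}
```

Checking `kill -0 $pid` on each pass matters: if the target crashes during startup the wait fails immediately instead of burning the whole retry budget.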
00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.856 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:27.856 [2024-12-10 12:21:49.351041] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:13:27.856 [2024-12-10 12:21:49.351087] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.856 [2024-12-10 12:21:49.435743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:27.856 [2024-12-10 12:21:49.478622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.856 [2024-12-10 12:21:49.478657] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.856 [2024-12-10 12:21:49.478664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.856 [2024-12-10 12:21:49.478670] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.857 [2024-12-10 12:21:49.478675] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:27.857 [2024-12-10 12:21:49.480087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.857 [2024-12-10 12:21:49.480208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.857 [2024-12-10 12:21:49.480248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.857 [2024-12-10 12:21:49.480249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:27.857 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.857 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:27.857 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:27.857 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:27.857 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:27.857 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:27.857 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:27.857 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29483 00:13:27.857 [2024-12-10 12:21:49.783085] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:27.857 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:27.857 { 00:13:27.857 "nqn": "nqn.2016-06.io.spdk:cnode29483", 00:13:27.857 "tgt_name": "foobar", 00:13:27.857 "method": "nvmf_create_subsystem", 00:13:27.857 "req_id": 1 00:13:27.857 } 00:13:27.857 Got JSON-RPC 
error response 00:13:27.857 response: 00:13:27.857 { 00:13:27.857 "code": -32603, 00:13:27.857 "message": "Unable to find target foobar" 00:13:27.857 }' 00:13:27.857 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:27.857 { 00:13:27.857 "nqn": "nqn.2016-06.io.spdk:cnode29483", 00:13:27.857 "tgt_name": "foobar", 00:13:27.857 "method": "nvmf_create_subsystem", 00:13:27.857 "req_id": 1 00:13:27.857 } 00:13:27.857 Got JSON-RPC error response 00:13:27.857 response: 00:13:27.857 { 00:13:27.857 "code": -32603, 00:13:27.857 "message": "Unable to find target foobar" 00:13:27.857 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:27.857 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:27.857 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15709 00:13:27.857 [2024-12-10 12:21:49.987798] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15709: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:27.857 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:27.857 { 00:13:27.857 "nqn": "nqn.2016-06.io.spdk:cnode15709", 00:13:27.857 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:27.857 "method": "nvmf_create_subsystem", 00:13:27.857 "req_id": 1 00:13:27.857 } 00:13:27.857 Got JSON-RPC error response 00:13:27.857 response: 00:13:27.857 { 00:13:27.857 "code": -32602, 00:13:27.857 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:27.857 }' 00:13:27.857 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:27.857 { 00:13:27.857 "nqn": "nqn.2016-06.io.spdk:cnode15709", 00:13:27.857 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:27.857 "method": "nvmf_create_subsystem", 
00:13:27.857 "req_id": 1 00:13:27.857 } 00:13:27.857 Got JSON-RPC error response 00:13:27.857 response: 00:13:27.857 { 00:13:27.857 "code": -32602, 00:13:27.857 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:27.857 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:28.116 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:28.116 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode19604 00:13:28.116 [2024-12-10 12:21:50.200504] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19604: invalid model number 'SPDK_Controller' 00:13:28.116 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:28.116 { 00:13:28.116 "nqn": "nqn.2016-06.io.spdk:cnode19604", 00:13:28.116 "model_number": "SPDK_Controller\u001f", 00:13:28.116 "method": "nvmf_create_subsystem", 00:13:28.116 "req_id": 1 00:13:28.116 } 00:13:28.116 Got JSON-RPC error response 00:13:28.116 response: 00:13:28.116 { 00:13:28.116 "code": -32602, 00:13:28.116 "message": "Invalid MN SPDK_Controller\u001f" 00:13:28.116 }' 00:13:28.116 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:28.116 { 00:13:28.116 "nqn": "nqn.2016-06.io.spdk:cnode19604", 00:13:28.116 "model_number": "SPDK_Controller\u001f", 00:13:28.116 "method": "nvmf_create_subsystem", 00:13:28.116 "req_id": 1 00:13:28.116 } 00:13:28.116 Got JSON-RPC error response 00:13:28.116 response: 00:13:28.116 { 00:13:28.116 "code": -32602, 00:13:28.116 "message": "Invalid MN SPDK_Controller\u001f" 00:13:28.116 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:28.116 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:28.116 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # 
local length=21 ll 00:13:28.116 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:28.116 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:28.116 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:28.116 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:28.116 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.116 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:28.116 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:28.116 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:28.116 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.116 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.116 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:28.116 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:28.116 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:28.116 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.116 
12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.116 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:28.117 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:28.117 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:28.117 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.117 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.117 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:28.117 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:28.117 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:28.117 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.117 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.117 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:28.117 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:28.117 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:28.117 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.117 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.117 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:28.117 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:28.117 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:28.117 12:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.117 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.117 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:28.117 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:28.117 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:28.117 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.117 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:28.376 12:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:28.376 12:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.376 12:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.376 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:28.377 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:28.377 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:28.377 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.377 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.377 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ | == \- ]] 00:13:28.377 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '|x&0kR(1eaAEsB!7>Zxz!' 00:13:28.377 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem -s '|x&0kR(1eaAEsB!7>Zxz!' nqn.2016-06.io.spdk:cnode4433 00:13:28.637 [2024-12-10 12:21:50.553715] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4433: invalid serial number '|x&0kR(1eaAEsB!7>Zxz!' 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:28.637 { 00:13:28.637 "nqn": "nqn.2016-06.io.spdk:cnode4433", 00:13:28.637 "serial_number": "|x&0kR(1eaAEsB!7>Zxz!", 00:13:28.637 "method": "nvmf_create_subsystem", 00:13:28.637 "req_id": 1 00:13:28.637 } 00:13:28.637 Got JSON-RPC error response 00:13:28.637 response: 00:13:28.637 { 00:13:28.637 "code": -32602, 00:13:28.637 "message": "Invalid SN |x&0kR(1eaAEsB!7>Zxz!" 00:13:28.637 }' 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:28.637 { 00:13:28.637 "nqn": "nqn.2016-06.io.spdk:cnode4433", 00:13:28.637 "serial_number": "|x&0kR(1eaAEsB!7>Zxz!", 00:13:28.637 "method": "nvmf_create_subsystem", 00:13:28.637 "req_id": 1 00:13:28.637 } 00:13:28.637 Got JSON-RPC error response 00:13:28.637 response: 00:13:28.637 { 00:13:28.637 "code": -32602, 00:13:28.637 "message": "Invalid SN |x&0kR(1eaAEsB!7>Zxz!" 
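The long run of trace lines from target/invalid.sh@19-31 is `gen_random_s` assembling the 21-character serial `|x&0kR(1eaAEsB!7>Zxz!` one random byte at a time: pick a code from the `chars` table (which is simply 32..127), `printf %x` it, and append via `echo -e '\xNN'`. A compact re-creation; the code range comes from the table in the log, while the modulo-based pick and the function shape are assumptions:

```shell
# Build a string of $1 random characters with codes 32..127, as in the
# chars array traced above (note this range includes space and DEL).
gen_random_s() {
    local length=$1 ll code s=
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 96 + 32 ))                 # 96 entries: codes 32..127
        s+=$(printf "\\x$(printf '%x' "$code")")     # hex code -> literal byte
    done
    printf '%s\n' "$s"
}
serial=$(gen_random_s 21)
```

Feeding such strings to `nvmf_create_subsystem` is how the test probes serial-number and model-number validation: most draws contain at least one character the target must reject.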
00:13:28.637 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x78' 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 74 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.637 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=/ 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x70' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 100 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.638 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:28.639 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:28.639 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:28.639 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.639 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.639 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:28.639 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:28.639 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:28.639 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:13:28.639 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.639 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:28.639 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:28.639 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:13:28.639 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.639 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.639 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:28.639 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+='&' 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x77' 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:28.897 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:28.898 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ C == \- ]] 00:13:28.898 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Cx)(jJ{TJI,N>_71/$Wfpg6_71/$Wfpg6_71/$Wfpg6_71/$Wfpg6_71/$Wfpg6 /dev/null' 00:13:30.976 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.571 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:33.571 00:13:33.571 real 0m12.088s 00:13:33.571 user 0m18.631s 00:13:33.572 sys 0m5.329s 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:33.572 ************************************ 00:13:33.572 END TEST nvmf_invalid 00:13:33.572 ************************************ 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress 
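The loop records above show `target/invalid.sh` assembling a test string one character at a time: `printf %x` converts a decimal code point to hex, `echo -e '\xNN'` decodes that hex escape back into a literal character, and `string+=` appends it. A minimal standalone sketch of the same pattern (the `codes` array here is a fixed sample for illustration; the real script picks code points at random):

```shell
#!/usr/bin/env bash
# Build a string character by character, mirroring the printf %x / echo -e
# pattern visible in the target/invalid.sh trace above.
codes=(74 123 84 74 73 44 78 62 95 55 49)
string=''
length=${#codes[@]}
ll=0
while (( ll < length )); do
    hex=$(printf %x "${codes[ll]}")   # decimal code -> hex, e.g. 74 -> 4a
    char=$(echo -e "\x$hex")          # hex escape -> literal character
    string+=$char
    (( ll++ )) || true                # guard: (( )) exits 1 when result is 0
done
echo "$string"
```

For this sample input the loop yields `J{TJI,N>_71`, matching the `string+=J`, `string+='{'`, `string+=T`, ... appends traced in the log.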
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:33.572 ************************************ 00:13:33.572 START TEST nvmf_connect_stress 00:13:33.572 ************************************ 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:33.572 * Looking for test storage... 00:13:33.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:33.572 12:21:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:33.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.572 --rc genhtml_branch_coverage=1 00:13:33.572 --rc genhtml_function_coverage=1 00:13:33.572 --rc genhtml_legend=1 00:13:33.572 --rc 
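The `scripts/common.sh` trace above (`lt 1.15 2` via `cmp_versions`) splits each version string on `.`, `-`, and `:` into arrays, then compares component by component, treating missing components as 0. A simplified reimplementation of that comparison (a sketch, not the exact `cmp_versions` code):

```shell
# Component-wise "less than" version comparison, in the spirit of the
# cmp_versions logic traced above: split on ".-:", compare numerically,
# missing components count as 0. Returns 0 (true) when $1 < $2.
ver_lt() {
    local IFS='.-:'
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i a b
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0}; b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal -> not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
ver_lt 2.1 2.1.3 && echo "2.1 < 2.1.3"
```

This is why the run above takes the `lt 1.15 2` branch: the first components already decide the comparison (1 < 2), so the detected lcov is treated as older than 2.x when choosing `lcov_rc_opt`.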
geninfo_all_blocks=1 00:13:33.572 --rc geninfo_unexecuted_blocks=1 00:13:33.572 00:13:33.572 ' 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:33.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.572 --rc genhtml_branch_coverage=1 00:13:33.572 --rc genhtml_function_coverage=1 00:13:33.572 --rc genhtml_legend=1 00:13:33.572 --rc geninfo_all_blocks=1 00:13:33.572 --rc geninfo_unexecuted_blocks=1 00:13:33.572 00:13:33.572 ' 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:33.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.572 --rc genhtml_branch_coverage=1 00:13:33.572 --rc genhtml_function_coverage=1 00:13:33.572 --rc genhtml_legend=1 00:13:33.572 --rc geninfo_all_blocks=1 00:13:33.572 --rc geninfo_unexecuted_blocks=1 00:13:33.572 00:13:33.572 ' 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:33.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.572 --rc genhtml_branch_coverage=1 00:13:33.572 --rc genhtml_function_coverage=1 00:13:33.572 --rc genhtml_legend=1 00:13:33.572 --rc geninfo_all_blocks=1 00:13:33.572 --rc geninfo_unexecuted_blocks=1 00:13:33.572 00:13:33.572 ' 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 
00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.572 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
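The enormous PATH echoed above is the result of `paths/export.sh` being sourced repeatedly: each pass prepends the same toolchain directories (`/opt/go/1.21.1/bin`, `/opt/golangci/1.54.2/bin`, `/opt/protoc/21.7/bin`) again, so they appear many times over. This is harmless but noisy; a suggested idempotent-prepend pattern (not what `paths/export.sh` currently does) avoids the duplication:

```shell
# Idempotent PATH prepend: add a directory only if it is not already
# present, avoiding the repeated /opt/go/... entries visible in the log.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;              # already present, do nothing
        *) PATH="$1:$PATH" ;;
    esac
}

# Demonstrate in a subshell so the caller's PATH is untouched.
demo=$(
    PATH=/usr/bin:/bin
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/go/1.21.1/bin   # second call is a no-op
    echo "$PATH"
)
echo "$demo"
```

Wrapping `$PATH` and the candidate in `:` on both sides makes the `case` match exact components rather than substrings.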
00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:33.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 
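The `[: : integer expression expected` error recorded above comes from `common.sh` line 33 evaluating `[ '' -eq 1 ]`: the variable under test is empty, and `-eq` requires integers on both sides. Defaulting the variable to 0 at expansion time makes the test well-defined (`flag` below is a placeholder name, not the actual variable in `nvmf/common.sh`):

```shell
# Sketch of the failure mode and a defensive fix: an unset/empty flag
# tested with [ "$flag" -eq 1 ] trips "integer expression expected";
# expanding it as ${flag:-0} substitutes 0 when it is empty or unset.
flag=''                               # empty, as in the logged run
if [ "${flag:-0}" -eq 1 ]; then
    msg="flag set"
else
    msg="flag not set"
fi
echo "$msg"
```

With `${flag:-0}` the comparison becomes `[ 0 -eq 1 ]`, which is simply false, and the script proceeds without the error that the log shows (the run continues anyway because the non-zero `[` status is swallowed, but the message disappears).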
-- # gather_supported_nvmf_pci_devs 00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:33.573 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.147 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:40.147 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:40.147 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:40.147 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:40.147 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:40.147 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:40.147 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:40.147 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:40.147 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:40.147 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:40.147 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:40.147 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:40.147 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:40.147 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:40.147 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:40.147 12:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:40.147 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:40.147 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:40.147 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:40.147 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:40.147 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:40.147 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 
]] 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:40.148 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:40.148 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.148 12:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:40.148 Found net devices under 0000:86:00.0: cvl_0_0 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:40.148 Found net devices under 0000:86:00.1: cvl_0_1 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:40.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:40.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:13:40.148 00:13:40.148 --- 10.0.0.2 ping statistics --- 00:13:40.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.148 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:40.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:40.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:13:40.148 00:13:40.148 --- 10.0.0.1 ping statistics --- 00:13:40.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.148 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1574387 00:13:40.148 12:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1574387 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1574387 ']' 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:40.148 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.149 [2024-12-10 12:22:01.389967] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:13:40.149 [2024-12-10 12:22:01.390015] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.149 [2024-12-10 12:22:01.472008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:40.149 [2024-12-10 12:22:01.515436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
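The nvmf_tcp_init steps traced above (common.sh@250-291) carve one E810 port into a private network namespace for the target while the second port stays in the root namespace as the initiator. Below is a minimal dry-run sketch of that topology, with the interface, namespace, and address values copied from the log; the commands are only collected, not executed, since the real sequence needs root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init sequence from the trace above.
# Interface/namespace/IP values are taken from the log; nothing is executed,
# the commands are only collected so the resulting topology is easy to read.
target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
target_ip=10.0.0.2 initiator_ip=10.0.0.1

cmds=(
  "ip -4 addr flush $target_if"                               # start from a clean state
  "ip -4 addr flush $initiator_if"
  "ip netns add $ns"                                          # namespace for the target
  "ip link set $target_if netns $ns"                          # move the target port into it
  "ip addr add $initiator_ip/24 dev $initiator_if"            # initiator stays in the root ns
  "ip netns exec $ns ip addr add $target_ip/24 dev $target_if"
  "ip link set $initiator_if up"
  "ip netns exec $ns ip link set $target_if up"
  "ip netns exec $ns ip link set lo up"
  "iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT"  # open the NVMe/TCP port
)
printf '%s\n' "${cmds[@]}"
```

The two `ping -c 1` checks in the log then confirm 10.0.0.2 is reachable from the root namespace and 10.0.0.1 from inside `cvl_0_0_ns_spdk` before `nvmf_tgt` is launched with `ip netns exec`.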
00:13:40.149 [2024-12-10 12:22:01.515468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.149 [2024-12-10 12:22:01.515477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.149 [2024-12-10 12:22:01.515483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.149 [2024-12-10 12:22:01.515489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.149 [2024-12-10 12:22:01.516769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.149 [2024-12-10 12:22:01.520180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.149 [2024-12-10 12:22:01.520182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.149 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:40.149 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:40.149 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:40.149 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:40.149 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.149 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.149 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:40.149 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.149 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:13:40.149 [2024-12-10 12:22:02.282221] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.149 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.149 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:40.149 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.149 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.149 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.149 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.149 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.149 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.149 [2024-12-10 12:22:02.302449] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.149 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.149 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:40.149 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.149 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.408 NULL1 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1574635 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpc.txt 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpc.txt 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 
-- # for i in $(seq 1 20) 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.408 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.667 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.667 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:40.667 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.667 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.667 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.925 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.925 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:40.925 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.925 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.925 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.492 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.492 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:41.492 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.492 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.492 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.751 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.751 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:41.751 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.751 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.751 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.009 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.010 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:42.010 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.010 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.010 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.268 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.268 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:42.268 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.268 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.268 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.527 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.527 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:42.527 12:22:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.527 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.527 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.099 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.099 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:43.099 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.099 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.099 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.362 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.362 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:43.362 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.362 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.362 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.620 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.620 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:43.620 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.621 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.621 
12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.879 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.879 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:43.879 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.879 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.879 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.448 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.449 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:44.449 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.449 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.449 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.707 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.707 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:44.707 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.707 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.707 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.966 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.966 
12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:44.966 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.966 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.966 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.224 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.224 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:45.224 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.224 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.224 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.483 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.483 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:45.483 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.483 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.483 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.051 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.051 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:46.051 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:13:46.051 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.051 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.310 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.310 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:46.310 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.310 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.310 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.568 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.568 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:46.568 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.568 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.568 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.827 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.827 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:46.827 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.827 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.827 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:13:47.085 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.085 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:47.085 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.085 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.085 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.653 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.653 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:47.653 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.653 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.653 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.911 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.912 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:47.912 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.912 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.912 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.170 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.170 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 1574635 00:13:48.170 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.170 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.170 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.429 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.429 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:48.429 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.429 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.429 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.995 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.995 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:48.995 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.995 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.995 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.254 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.254 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:49.254 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.254 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:49.254 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.513 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.513 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:49.513 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.513 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.513 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.771 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.771 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:49.771 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.771 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.771 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.030 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.030 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:50.030 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.030 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.030 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.598 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
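The repeated `kill -0 1574635` / `rpc_cmd` records above are a liveness-poll loop: `kill -0` delivers no signal, it only reports whether the PID still exists, so the test spins until the stress workload exits and then reaps it with `wait`. A minimal sketch of that pattern (the `sleep` stand-in and variable names are hypothetical, not from connect_stress.sh):

```shell
#!/usr/bin/env bash
# Stand-in for the real stress workload (hypothetical; the log's process
# is the connect_stress binary with PID 1574635).
sleep 2 &
stress_pid=$!

# `kill -0` sends no signal -- it succeeds only while the process exists
# and is signalable, so this loop runs until the workload exits.
while kill -0 "$stress_pid" 2>/dev/null; do
    # connect_stress.sh issues an rpc_cmd against the target here each
    # iteration; this sketch just idles briefly.
    sleep 0.25
done

# Once kill -0 fails, the process is gone; `wait` reaps the zombie and
# returns its exit status, mirroring the `wait 1574635` call in the log.
wait "$stress_pid"
echo "stress process ${stress_pid} exited with status $?"
```

The "No such process" message in the log below is expected: the final unguarded `kill -0` on line 34 of connect_stress.sh races with the process exiting, which is why the script follows it with `wait` rather than treating the failure as an error.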
00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1574635 00:13:50.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1574635) - No such process 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1574635 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpc.txt 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:50.598 rmmod nvme_tcp 00:13:50.598 rmmod nvme_fabrics 00:13:50.598 rmmod nvme_keyring 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1574387 ']' 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1574387 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1574387 ']' 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1574387 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1574387 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1574387' 00:13:50.598 killing process with pid 1574387 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1574387 00:13:50.598 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1574387 00:13:50.857 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:50.857 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:50.857 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:50.857 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:13:50.857 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:50.857 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:50.858 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:50.858 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:50.858 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:50.858 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.858 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.858 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.765 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:52.765 00:13:52.765 real 0m19.658s 00:13:52.765 user 0m41.574s 00:13:52.765 sys 0m8.524s 00:13:52.765 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.765 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.765 ************************************ 00:13:52.765 END TEST nvmf_connect_stress 00:13:52.765 ************************************ 00:13:52.765 12:22:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:52.765 12:22:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:52.765 12:22:14 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.765 12:22:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:53.025 ************************************ 00:13:53.025 START TEST nvmf_fused_ordering 00:13:53.025 ************************************ 00:13:53.025 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:53.025 * Looking for test storage... 00:13:53.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:53.025 12:22:15 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # 
ver2[v]=2 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:53.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.025 --rc genhtml_branch_coverage=1 00:13:53.025 --rc genhtml_function_coverage=1 00:13:53.025 --rc genhtml_legend=1 00:13:53.025 --rc geninfo_all_blocks=1 00:13:53.025 --rc geninfo_unexecuted_blocks=1 00:13:53.025 00:13:53.025 ' 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:53.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.025 --rc genhtml_branch_coverage=1 00:13:53.025 --rc genhtml_function_coverage=1 00:13:53.025 --rc genhtml_legend=1 00:13:53.025 --rc geninfo_all_blocks=1 00:13:53.025 --rc geninfo_unexecuted_blocks=1 00:13:53.025 00:13:53.025 ' 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:53.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.025 --rc genhtml_branch_coverage=1 00:13:53.025 --rc genhtml_function_coverage=1 00:13:53.025 --rc genhtml_legend=1 00:13:53.025 --rc geninfo_all_blocks=1 00:13:53.025 --rc geninfo_unexecuted_blocks=1 00:13:53.025 00:13:53.025 ' 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:53.025 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.025 --rc genhtml_branch_coverage=1 00:13:53.025 --rc genhtml_function_coverage=1 00:13:53.025 --rc genhtml_legend=1 00:13:53.025 --rc geninfo_all_blocks=1 00:13:53.025 --rc geninfo_unexecuted_blocks=1 00:13:53.025 00:13:53.025 ' 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:53.025 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:53.025 12:22:15 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.026 12:22:15 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:53.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:53.026 12:22:15 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:53.026 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- 
# local -a pci_devs 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:59.611 
Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:59.611 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:59.611 Found net devices under 0000:86:00.0: cvl_0_0 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.611 12:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:59.611 Found net devices under 0000:86:00.1: cvl_0_1 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:59.611 
12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:59.611 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:59.612 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:59.612 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:59.612 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:59.612 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:59.612 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:59.612 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:59.612 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:13:59.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:59.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:13:59.612 00:13:59.612 --- 10.0.0.2 ping statistics --- 00:13:59.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.612 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:59.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:59.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:13:59.612 00:13:59.612 --- 10.0.0.1 ping statistics --- 00:13:59.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.612 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 
0x2 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1579788 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1579788 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1579788 ']' 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:59.612 [2024-12-10 12:22:21.232844] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:13:59.612 [2024-12-10 12:22:21.232893] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.612 [2024-12-10 12:22:21.312114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.612 [2024-12-10 12:22:21.352506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.612 [2024-12-10 12:22:21.352542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.612 [2024-12-10 12:22:21.352549] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.612 [2024-12-10 12:22:21.352555] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.612 [2024-12-10 12:22:21.352560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:59.612 [2024-12-10 12:22:21.353080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:59.612 [2024-12-10 12:22:21.485894] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:59.612 [2024-12-10 12:22:21.510091] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:59.612 NULL1 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.612 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:59.612 [2024-12-10 12:22:21.569684] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:13:59.612 [2024-12-10 12:22:21.569729] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1579923 ] 00:13:59.871 Attached to nqn.2016-06.io.spdk:cnode1 00:13:59.871 Namespace ID: 1 size: 1GB 00:13:59.871 fused_ordering(0) 00:13:59.871 fused_ordering(1) 00:13:59.871 fused_ordering(2) 00:13:59.871 fused_ordering(3) 00:13:59.871 fused_ordering(4) 00:13:59.871 fused_ordering(5) 00:13:59.871 fused_ordering(6) 00:13:59.871 fused_ordering(7) 00:13:59.872 fused_ordering(8) 00:13:59.872 fused_ordering(9) 00:13:59.872 fused_ordering(10) 00:13:59.872 fused_ordering(11) 00:13:59.872 fused_ordering(12) 00:13:59.872 fused_ordering(13) 00:13:59.872 fused_ordering(14) 00:13:59.872 fused_ordering(15) 00:13:59.872 fused_ordering(16) 00:13:59.872 fused_ordering(17) 00:13:59.872 fused_ordering(18) 00:13:59.872 fused_ordering(19) 00:13:59.872 fused_ordering(20) 00:13:59.872 fused_ordering(21) 00:13:59.872 fused_ordering(22) 00:13:59.872 fused_ordering(23) 00:13:59.872 fused_ordering(24) 00:13:59.872 fused_ordering(25) 00:13:59.872 fused_ordering(26) 00:13:59.872 fused_ordering(27) 00:13:59.872 
fused_ordering(28) 00:13:59.872 fused_ordering(29) 00:13:59.872 fused_ordering(30) 00:13:59.872 fused_ordering(31) 00:13:59.872 fused_ordering(32) 00:13:59.872 fused_ordering(33) 00:13:59.872 fused_ordering(34) 00:13:59.872 fused_ordering(35) 00:13:59.872 fused_ordering(36) 00:13:59.872 fused_ordering(37) 00:13:59.872 fused_ordering(38) 00:13:59.872 fused_ordering(39) 00:13:59.872 fused_ordering(40) 00:13:59.872 fused_ordering(41) 00:13:59.872 fused_ordering(42) 00:13:59.872 fused_ordering(43) 00:13:59.872 fused_ordering(44) 00:13:59.872 fused_ordering(45) 00:13:59.872 fused_ordering(46) 00:13:59.872 fused_ordering(47) 00:13:59.872 fused_ordering(48) 00:13:59.872 fused_ordering(49) 00:13:59.872 fused_ordering(50) 00:13:59.872 fused_ordering(51) 00:13:59.872 fused_ordering(52) 00:13:59.872 fused_ordering(53) 00:13:59.872 fused_ordering(54) 00:13:59.872 fused_ordering(55) 00:13:59.872 fused_ordering(56) 00:13:59.872 fused_ordering(57) 00:13:59.872 fused_ordering(58) 00:13:59.872 fused_ordering(59) 00:13:59.872 fused_ordering(60) 00:13:59.872 fused_ordering(61) 00:13:59.872 fused_ordering(62) 00:13:59.872 fused_ordering(63) 00:13:59.872 fused_ordering(64) 00:13:59.872 fused_ordering(65) 00:13:59.872 fused_ordering(66) 00:13:59.872 fused_ordering(67) 00:13:59.872 fused_ordering(68) 00:13:59.872 fused_ordering(69) 00:13:59.872 fused_ordering(70) 00:13:59.872 fused_ordering(71) 00:13:59.872 fused_ordering(72) 00:13:59.872 fused_ordering(73) 00:13:59.872 fused_ordering(74) 00:13:59.872 fused_ordering(75) 00:13:59.872 fused_ordering(76) 00:13:59.872 fused_ordering(77) 00:13:59.872 fused_ordering(78) 00:13:59.872 fused_ordering(79) 00:13:59.872 fused_ordering(80) 00:13:59.872 fused_ordering(81) 00:13:59.872 fused_ordering(82) 00:13:59.872 fused_ordering(83) 00:13:59.872 fused_ordering(84) 00:13:59.872 fused_ordering(85) 00:13:59.872 fused_ordering(86) 00:13:59.872 fused_ordering(87) 00:13:59.872 fused_ordering(88) 00:13:59.872 fused_ordering(89) 00:13:59.872 
fused_ordering(90) 00:13:59.872 fused_ordering(91) 00:13:59.872 fused_ordering(92) 00:13:59.872 fused_ordering(93) 00:13:59.872 fused_ordering(94) 00:13:59.872 fused_ordering(95) 00:13:59.872 fused_ordering(96) 00:13:59.872 fused_ordering(97) 00:13:59.872 fused_ordering(98) 00:13:59.872 fused_ordering(99) 00:13:59.872 fused_ordering(100) 00:13:59.872 fused_ordering(101) 00:13:59.872 fused_ordering(102) 00:13:59.872 fused_ordering(103) 00:13:59.872 fused_ordering(104) 00:13:59.872 fused_ordering(105) 00:13:59.872 fused_ordering(106) 00:13:59.872 fused_ordering(107) 00:13:59.872 fused_ordering(108) 00:13:59.872 fused_ordering(109) 00:13:59.872 fused_ordering(110) 00:13:59.872 fused_ordering(111) 00:13:59.872 fused_ordering(112) 00:13:59.872 fused_ordering(113) 00:13:59.872 fused_ordering(114) 00:13:59.872 fused_ordering(115) 00:13:59.872 fused_ordering(116) 00:13:59.872 fused_ordering(117) 00:13:59.872 fused_ordering(118) 00:13:59.872 fused_ordering(119) 00:13:59.872 fused_ordering(120) 00:13:59.872 fused_ordering(121) 00:13:59.872 fused_ordering(122) 00:13:59.872 fused_ordering(123) 00:13:59.872 fused_ordering(124) 00:13:59.872 fused_ordering(125) 00:13:59.872 fused_ordering(126) 00:13:59.872 fused_ordering(127) 00:13:59.872 fused_ordering(128) 00:13:59.872 fused_ordering(129) 00:13:59.872 fused_ordering(130) 00:13:59.872 fused_ordering(131) 00:13:59.872 fused_ordering(132) 00:13:59.872 fused_ordering(133) 00:13:59.872 fused_ordering(134) 00:13:59.872 fused_ordering(135) 00:13:59.872 fused_ordering(136) 00:13:59.872 fused_ordering(137) 00:13:59.872 fused_ordering(138) 00:13:59.872 fused_ordering(139) 00:13:59.872 fused_ordering(140) 00:13:59.872 fused_ordering(141) 00:13:59.872 fused_ordering(142) 00:13:59.872 fused_ordering(143) 00:13:59.872 fused_ordering(144) 00:13:59.872 fused_ordering(145) 00:13:59.872 fused_ordering(146) 00:13:59.872 fused_ordering(147) 00:13:59.872 fused_ordering(148) 00:13:59.872 fused_ordering(149) 00:13:59.872 fused_ordering(150) 
00:13:59.872 fused_ordering(151) 00:13:59.872 fused_ordering(152) 00:13:59.872 fused_ordering(153) 00:13:59.872 fused_ordering(154) 00:13:59.872 fused_ordering(155) 00:13:59.872 fused_ordering(156) 00:13:59.872 fused_ordering(157) 00:13:59.872 fused_ordering(158) 00:13:59.872 fused_ordering(159) 00:13:59.872 fused_ordering(160) 00:13:59.872 fused_ordering(161) 00:13:59.872 fused_ordering(162) 00:13:59.872 fused_ordering(163) 00:13:59.872 fused_ordering(164) 00:13:59.872 fused_ordering(165) 00:13:59.872 fused_ordering(166) 00:13:59.872 fused_ordering(167) 00:13:59.872 fused_ordering(168) 00:13:59.872 fused_ordering(169) 00:13:59.872 fused_ordering(170) 00:13:59.872 fused_ordering(171) 00:13:59.872 fused_ordering(172) 00:13:59.872 fused_ordering(173) 00:13:59.873 fused_ordering(174) 00:13:59.873 fused_ordering(175) 00:13:59.873 fused_ordering(176) 00:13:59.873 fused_ordering(177) 00:13:59.873 fused_ordering(178) 00:13:59.873 fused_ordering(179) 00:13:59.873 fused_ordering(180) 00:13:59.873 fused_ordering(181) 00:13:59.873 fused_ordering(182) 00:13:59.873 fused_ordering(183) 00:13:59.873 fused_ordering(184) 00:13:59.873 fused_ordering(185) 00:13:59.873 fused_ordering(186) 00:13:59.873 fused_ordering(187) 00:13:59.873 fused_ordering(188) 00:13:59.873 fused_ordering(189) 00:13:59.873 fused_ordering(190) 00:13:59.873 fused_ordering(191) 00:13:59.873 fused_ordering(192) 00:13:59.873 fused_ordering(193) 00:13:59.873 fused_ordering(194) 00:13:59.873 fused_ordering(195) 00:13:59.873 fused_ordering(196) 00:13:59.873 fused_ordering(197) 00:13:59.873 fused_ordering(198) 00:13:59.873 fused_ordering(199) 00:13:59.873 fused_ordering(200) 00:13:59.873 fused_ordering(201) 00:13:59.873 fused_ordering(202) 00:13:59.873 fused_ordering(203) 00:13:59.873 fused_ordering(204) 00:13:59.873 fused_ordering(205) 00:14:00.132 fused_ordering(206) 00:14:00.132 fused_ordering(207) 00:14:00.132 fused_ordering(208) 00:14:00.132 fused_ordering(209) 00:14:00.132 fused_ordering(210) 00:14:00.132 
fused_ordering(211) 00:14:00.132 fused_ordering(212) 00:14:00.132 fused_ordering(213) 00:14:00.132 fused_ordering(214) 00:14:00.132 fused_ordering(215) 00:14:00.132 fused_ordering(216) 00:14:00.132 fused_ordering(217) 00:14:00.132 fused_ordering(218) 00:14:00.132 fused_ordering(219) 00:14:00.132 fused_ordering(220) 00:14:00.132 fused_ordering(221) 00:14:00.132 fused_ordering(222) 00:14:00.132 fused_ordering(223) 00:14:00.132 fused_ordering(224) 00:14:00.132 fused_ordering(225) 00:14:00.132 fused_ordering(226) 00:14:00.132 fused_ordering(227) 00:14:00.132 fused_ordering(228) 00:14:00.132 fused_ordering(229) 00:14:00.132 fused_ordering(230) 00:14:00.132 fused_ordering(231) 00:14:00.132 fused_ordering(232) 00:14:00.132 fused_ordering(233) 00:14:00.132 fused_ordering(234) 00:14:00.132 fused_ordering(235) 00:14:00.132 fused_ordering(236) 00:14:00.132 fused_ordering(237) 00:14:00.132 fused_ordering(238) 00:14:00.132 fused_ordering(239) 00:14:00.132 fused_ordering(240) 00:14:00.132 fused_ordering(241) 00:14:00.132 fused_ordering(242) 00:14:00.132 fused_ordering(243) 00:14:00.132 fused_ordering(244) 00:14:00.132 fused_ordering(245) 00:14:00.132 fused_ordering(246) 00:14:00.132 fused_ordering(247) 00:14:00.132 fused_ordering(248) 00:14:00.132 fused_ordering(249) 00:14:00.132 fused_ordering(250) 00:14:00.132 fused_ordering(251) 00:14:00.132 fused_ordering(252) 00:14:00.132 fused_ordering(253) 00:14:00.132 fused_ordering(254) 00:14:00.132 fused_ordering(255) 00:14:00.132 fused_ordering(256) 00:14:00.132 fused_ordering(257) 00:14:00.132 fused_ordering(258) 00:14:00.132 fused_ordering(259) 00:14:00.132 fused_ordering(260) 00:14:00.132 fused_ordering(261) 00:14:00.132 fused_ordering(262) 00:14:00.132 fused_ordering(263) 00:14:00.132 fused_ordering(264) 00:14:00.132 fused_ordering(265) 00:14:00.132 fused_ordering(266) 00:14:00.132 fused_ordering(267) 00:14:00.132 fused_ordering(268) 00:14:00.132 fused_ordering(269) 00:14:00.132 fused_ordering(270) 00:14:00.132 fused_ordering(271) 
00:14:00.132 fused_ordering(272) 00:14:00.132 fused_ordering(273) 00:14:00.132 fused_ordering(274) 00:14:00.132 fused_ordering(275) 00:14:00.132 fused_ordering(276) 00:14:00.132 fused_ordering(277) 00:14:00.132 fused_ordering(278) 00:14:00.132 fused_ordering(279) 00:14:00.132 fused_ordering(280) 00:14:00.132 fused_ordering(281) 00:14:00.132 fused_ordering(282) 00:14:00.132 fused_ordering(283) 00:14:00.132 fused_ordering(284) 00:14:00.132 fused_ordering(285) 00:14:00.132 fused_ordering(286) 00:14:00.132 fused_ordering(287) 00:14:00.132 fused_ordering(288) 00:14:00.132 fused_ordering(289) 00:14:00.132 fused_ordering(290) 00:14:00.132 fused_ordering(291) 00:14:00.132 fused_ordering(292) 00:14:00.132 fused_ordering(293) 00:14:00.132 fused_ordering(294) 00:14:00.132 fused_ordering(295) 00:14:00.132 fused_ordering(296) 00:14:00.132 fused_ordering(297) 00:14:00.132 fused_ordering(298) 00:14:00.132 fused_ordering(299) 00:14:00.132 fused_ordering(300) 00:14:00.132 fused_ordering(301) 00:14:00.132 fused_ordering(302) 00:14:00.132 fused_ordering(303) 00:14:00.132 fused_ordering(304) 00:14:00.132 fused_ordering(305) 00:14:00.132 fused_ordering(306) 00:14:00.132 fused_ordering(307) 00:14:00.132 fused_ordering(308) 00:14:00.132 fused_ordering(309) 00:14:00.132 fused_ordering(310) 00:14:00.132 fused_ordering(311) 00:14:00.132 fused_ordering(312) 00:14:00.132 fused_ordering(313) 00:14:00.132 fused_ordering(314) 00:14:00.132 fused_ordering(315) 00:14:00.132 fused_ordering(316) 00:14:00.132 fused_ordering(317) 00:14:00.132 fused_ordering(318) 00:14:00.132 fused_ordering(319) 00:14:00.132 fused_ordering(320) 00:14:00.132 fused_ordering(321) 00:14:00.132 fused_ordering(322) 00:14:00.132 fused_ordering(323) 00:14:00.132 fused_ordering(324) 00:14:00.132 fused_ordering(325) 00:14:00.132 fused_ordering(326) 00:14:00.132 fused_ordering(327) 00:14:00.132 fused_ordering(328) 00:14:00.132 fused_ordering(329) 00:14:00.132 fused_ordering(330) 00:14:00.132 fused_ordering(331) 00:14:00.132 
00:14:00.132 fused_ordering(332) ... fused_ordering(1023) 00:14:01.529 [692 repetitive per-iteration fused_ordering entries condensed]
12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:01.529 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:01.529 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:01.529 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:01.529 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:01.529 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:01.529 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:01.529 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:01.529 rmmod nvme_tcp 00:14:01.529 rmmod nvme_fabrics 00:14:01.529 rmmod nvme_keyring 00:14:01.529 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:14:01.529 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:01.529 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:01.529 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1579788 ']' 00:14:01.529 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1579788 00:14:01.529 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1579788 ']' 00:14:01.529 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1579788 00:14:01.529 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:01.529 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.529 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1579788 00:14:01.529 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:01.529 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:01.529 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1579788' 00:14:01.529 killing process with pid 1579788 00:14:01.529 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1579788 00:14:01.529 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1579788 00:14:01.789 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:01.789 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:14:01.789 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:01.789 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:01.789 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:01.789 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:01.789 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:01.789 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:01.789 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:01.789 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.789 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:01.789 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.693 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:03.693 00:14:03.693 real 0m10.894s 00:14:03.693 user 0m5.187s 00:14:03.693 sys 0m5.895s 00:14:03.693 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:03.693 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.693 ************************************ 00:14:03.693 END TEST nvmf_fused_ordering 00:14:03.693 ************************************ 00:14:03.952 12:22:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:03.952 12:22:25 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:03.952 12:22:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.952 12:22:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:03.952 ************************************ 00:14:03.952 START TEST nvmf_ns_masking 00:14:03.952 ************************************ 00:14:03.952 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:03.952 * Looking for test storage... 00:14:03.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:03.952 12:22:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:03.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.952 --rc genhtml_branch_coverage=1 00:14:03.952 --rc genhtml_function_coverage=1 00:14:03.952 --rc genhtml_legend=1 00:14:03.952 --rc geninfo_all_blocks=1 00:14:03.952 --rc geninfo_unexecuted_blocks=1 00:14:03.952 00:14:03.952 ' 00:14:03.952 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:03.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.953 --rc genhtml_branch_coverage=1 00:14:03.953 --rc genhtml_function_coverage=1 00:14:03.953 --rc genhtml_legend=1 00:14:03.953 --rc geninfo_all_blocks=1 00:14:03.953 --rc geninfo_unexecuted_blocks=1 00:14:03.953 00:14:03.953 ' 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:03.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.953 --rc genhtml_branch_coverage=1 00:14:03.953 --rc genhtml_function_coverage=1 00:14:03.953 --rc genhtml_legend=1 00:14:03.953 --rc geninfo_all_blocks=1 00:14:03.953 --rc geninfo_unexecuted_blocks=1 00:14:03.953 00:14:03.953 ' 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:03.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.953 --rc genhtml_branch_coverage=1 00:14:03.953 --rc 
genhtml_function_coverage=1 00:14:03.953 --rc genhtml_legend=1 00:14:03.953 --rc geninfo_all_blocks=1 00:14:03.953 --rc geninfo_unexecuted_blocks=1 00:14:03.953 00:14:03.953 ' 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:03.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:03.953 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:04.212 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:14:04.212 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:04.212 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:04.212 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:04.212 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=764b1574-851c-4b56-b9a4-a298763ea74f 00:14:04.212 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:04.212 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=2c1dcf70-56cb-48a8-9b03-f85c0acb8bb5 00:14:04.212 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:04.212 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:04.212 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:04.212 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:04.212 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=ccefa49f-a014-4677-b364-1cad29da07cd 00:14:04.212 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:04.212 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:04.212 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.212 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:04.212 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:14:04.212 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:04.213 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.213 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:04.213 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.213 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:04.213 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:04.213 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:04.213 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:10.783 12:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:10.783 12:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:10.783 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:10.783 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:14:10.783 Found net devices under 0000:86:00.0: cvl_0_0 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:10.783 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:10.784 Found net devices under 0000:86:00.1: cvl_0_1 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:10.784 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:10.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:10.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:14:10.784 00:14:10.784 --- 10.0.0.2 ping statistics --- 00:14:10.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.784 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:10.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:10.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:14:10.784 00:14:10.784 --- 10.0.0.1 ping statistics --- 00:14:10.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.784 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1583794 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1583794 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1583794 ']' 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:10.784 [2024-12-10 12:22:32.237499] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:14:10.784 [2024-12-10 12:22:32.237556] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.784 [2024-12-10 12:22:32.318608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.784 [2024-12-10 12:22:32.359832] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.784 [2024-12-10 12:22:32.359865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:10.784 [2024-12-10 12:22:32.359873] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.784 [2024-12-10 12:22:32.359879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.784 [2024-12-10 12:22:32.359884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.784 [2024-12-10 12:22:32.360414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:10.784 [2024-12-10 12:22:32.657289] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b 
Malloc1 00:14:10.784 Malloc1 00:14:10.784 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:11.043 Malloc2 00:14:11.043 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:11.302 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:11.561 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:11.561 [2024-12-10 12:22:33.654424] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.561 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:11.561 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ccefa49f-a014-4677-b364-1cad29da07cd -a 10.0.0.2 -s 4420 -i 4 00:14:11.819 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:11.819 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:11.819 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:11.819 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:11.819 12:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:13.722 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:13.722 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:13.722 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:13.722 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:13.722 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:13.722 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:13.722 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:13.722 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:13.722 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:13.722 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:13.722 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:13.722 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:13.722 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:13.722 [ 0]:0x1 00:14:13.722 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:13.722 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:13.981 
12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=016a3d0e62e64c1199e625862b78832c 00:14:13.981 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 016a3d0e62e64c1199e625862b78832c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:13.981 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:13.981 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:13.981 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:13.981 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:13.981 [ 0]:0x1 00:14:13.981 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:13.981 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:14.239 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=016a3d0e62e64c1199e625862b78832c 00:14:14.239 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 016a3d0e62e64c1199e625862b78832c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:14.239 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:14.239 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:14.239 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:14.239 [ 1]:0x2 00:14:14.239 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:14:14.239 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:14.239 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6a8f2f23953249a4a072fff04d4cd5ca
00:14:14.240 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6a8f2f23953249a4a072fff04d4cd5ca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:14.240 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:14:14.240 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:14.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:14.497 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:14.756 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:14:14.756 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:14:14.756 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ccefa49f-a014-4677-b364-1cad29da07cd -a 10.0.0.2 -s 4420 -i 4
00:14:15.014 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:14:15.014 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:14:15.014 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:14:15.014 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]]
00:14:15.014 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1
00:14:15.014 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:14:16.915 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:14:16.915 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:14:16.915 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:14:17.174 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:14:17.174 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:14:17.174 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:14:17.174 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:14:17.174 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:14:17.174 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:14:17.174 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:14:17.174 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1
00:14:17.174 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:14:17.174 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:14:17.174 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:14:17.174 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:17.174 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:14:17.174 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:17.174 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:14:17.174 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:17.174 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:17.174 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:17.174 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:17.174 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:14:17.174 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:17.174 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:14:17.174 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:17.175 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:17.175 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:17.175 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2
00:14:17.175 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:17.175 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:17.175 [ 0]:0x2
00:14:17.175 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:17.175 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:17.175 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6a8f2f23953249a4a072fff04d4cd5ca
00:14:17.175 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6a8f2f23953249a4a072fff04d4cd5ca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:17.175 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:14:17.433 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:14:17.433 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:17.433 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:17.433 [ 0]:0x1
00:14:17.433 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:17.433 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:17.433 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=016a3d0e62e64c1199e625862b78832c
00:14:17.433 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 016a3d0e62e64c1199e625862b78832c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:17.433 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:14:17.434 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:17.434 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:17.434 [ 1]:0x2
00:14:17.434 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:17.434 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:17.434 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6a8f2f23953249a4a072fff04d4cd5ca
00:14:17.434 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6a8f2f23953249a4a072fff04d4cd5ca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:17.434 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:17.693 [ 0]:0x2
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6a8f2f23953249a4a072fff04d4cd5ca
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6a8f2f23953249a4a072fff04d4cd5ca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:14:17.693 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:17.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:17.952 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:14:17.952 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:14:17.952 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ccefa49f-a014-4677-b364-1cad29da07cd -a 10.0.0.2 -s 4420 -i 4
00:14:18.210 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:14:18.211 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:14:18.211 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:14:18.211 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:14:18.211 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:14:18.211 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:20.744 [ 0]:0x1
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=016a3d0e62e64c1199e625862b78832c
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 016a3d0e62e64c1199e625862b78832c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:20.744 [ 1]:0x2
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6a8f2f23953249a4a072fff04d4cd5ca
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6a8f2f23953249a4a072fff04d4cd5ca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:20.744 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:20.745 [ 0]:0x2
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6a8f2f23953249a4a072fff04d4cd5ca
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6a8f2f23953249a4a072fff04d4cd5ca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]]
00:14:20.745 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:14:21.004 [2024-12-10 12:22:42.984936] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
request:
00:14:21.004 {
00:14:21.004 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:14:21.004 "nsid": 2,
00:14:21.004 "host": "nqn.2016-06.io.spdk:host1",
00:14:21.004 "method": "nvmf_ns_remove_host",
00:14:21.004 "req_id": 1
00:14:21.004 }
00:14:21.004 Got JSON-RPC error response
00:14:21.004 response:
00:14:21.004 {
00:14:21.004 "code": -32602,
00:14:21.004 "message": "Invalid parameters"
00:14:21.004 }
00:14:21.004 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:14:21.004 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:21.004 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:21.004 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:21.004 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:21.004 [ 0]:0x2
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6a8f2f23953249a4a072fff04d4cd5ca
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6a8f2f23953249a4a072fff04d4cd5ca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:21.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1585785
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1585785 /var/tmp/host.sock
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1585785 ']'
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:14:21.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:21.004 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:14:21.263 [2024-12-10 12:22:43.211079] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization...
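The `NOT`/`valid_exec_arg` xtrace that recurs in this run (`es=1`, the `(( es > 128 ))` check, `(( !es == 0 ))`) implements a negative assertion: the wrapped command must fail for the test step to pass. A simplified, runnable sketch of that pattern (the real helper in autotest_common.sh does more bookkeeping; the signal-exit handling below is a condensed reading of it):

```shell
# Simplified sketch of autotest_common.sh's NOT wrapper: succeed only when
# the wrapped command fails. This is how the log asserts that a masked
# namespace is *not* visible and that an invalid RPC call is rejected.
NOT() {
    local es=0
    "$@" || es=$?
    # Exit codes above 128 usually mean death by signal; treat that as a
    # test error rather than the expected, orderly failure.
    if (( es > 128 )); then
        return 1
    fi
    # Success for the caller means the wrapped command did NOT succeed.
    (( es != 0 ))
}

NOT false && echo "negative assertion passed"
```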
00:14:21.263 [2024-12-10 12:22:43.211124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1585785 ]
[2024-12-10 12:22:43.284815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-10 12:22:43.324585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:21.522 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:21.522 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0
00:14:21.522 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:21.781 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:21.781 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 764b1574-851c-4b56-b9a4-a298763ea74f
00:14:21.781 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:14:21.781 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 764B1574851C4B56B9A4A298763EA74F -i
00:14:22.040 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 2c1dcf70-56cb-48a8-9b03-f85c0acb8bb5
00:14:22.040 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:14:22.040 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 2C1DCF7056CB48A89B03F85C0ACB8BB5 -i
00:14:22.298 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:14:22.572 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
00:14:22.855 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:14:22.855 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:14:23.129 nvme0n1
00:14:23.129 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:14:23.129 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:14:23.388 nvme1n2
00:14:23.388 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name'
00:14:23.388 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs
00:14:23.388 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:14:23.388 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort
00:14:23.388 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs
00:14:23.647 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]]
00:14:23.647 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid'
00:14:23.647 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1
00:14:23.647 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1
00:14:23.906 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 764b1574-851c-4b56-b9a4-a298763ea74f == \7\6\4\b\1\5\7\4\-\8\5\1\c\-\4\b\5\6\-\b\9\a\4\-\a\2\9\8\7\6\3\e\a\7\4\f ]]
00:14:23.906 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2
00:14:23.906 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid'
00:14:23.906 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2
00:14:24.165 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 2c1dcf70-56cb-48a8-9b03-f85c0acb8bb5 == \2\c\1\d\c\f\7\0\-\5\6\c\b\-\4\8\a\8\-\9\b\0\3\-\f\8\5\c\0\a\c\b\8\b\b\5 ]]
00:14:24.165 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:24.165 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:24.424 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 764b1574-851c-4b56-b9a4-a298763ea74f
00:14:24.424 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:14:24.424 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 764B1574851C4B56B9A4A298763EA74F
00:14:24.424 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:14:24.424 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 764B1574851C4B56B9A4A298763EA74F
00:14:24.424 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
00:14:24.424 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:24.424 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
00:14:24.424 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking --
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:24.424 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:14:24.424 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:24.424 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:14:24.424 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:14:24.424 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 764B1574851C4B56B9A4A298763EA74F 00:14:24.683 [2024-12-10 12:22:46.691151] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:24.683 [2024-12-10 12:22:46.691187] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:24.683 [2024-12-10 12:22:46.691195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:24.683 request: 00:14:24.683 { 00:14:24.683 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:24.683 "namespace": { 00:14:24.683 "bdev_name": "invalid", 00:14:24.683 "nsid": 1, 00:14:24.683 "nguid": "764B1574851C4B56B9A4A298763EA74F", 00:14:24.683 "no_auto_visible": false, 00:14:24.683 "hide_metadata": false 00:14:24.683 }, 00:14:24.683 "method": "nvmf_subsystem_add_ns", 00:14:24.683 "req_id": 1 00:14:24.683 } 00:14:24.683 Got JSON-RPC error response 00:14:24.683 response: 00:14:24.683 { 00:14:24.683 "code": -32602, 00:14:24.683 "message": "Invalid parameters" 00:14:24.683 } 
00:14:24.683 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:24.683 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:24.683 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:24.683 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:24.683 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 764b1574-851c-4b56-b9a4-a298763ea74f 00:14:24.683 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:24.683 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 764B1574851C4B56B9A4A298763EA74F -i 00:14:24.942 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:26.847 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:26.847 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:26.847 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:27.106 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:27.106 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1585785 00:14:27.106 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1585785 ']' 00:14:27.106 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1585785 00:14:27.106 12:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:27.106 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.106 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1585785 00:14:27.106 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:27.106 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:27.106 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1585785' 00:14:27.106 killing process with pid 1585785 00:14:27.106 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1585785 00:14:27.106 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1585785 00:14:27.365 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:27.624 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:27.624 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:27.624 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:27.624 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:27.624 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:27.624 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:27.624 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 
00:14:27.624 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:27.624 rmmod nvme_tcp 00:14:27.624 rmmod nvme_fabrics 00:14:27.624 rmmod nvme_keyring 00:14:27.624 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:27.624 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:27.624 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:27.624 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1583794 ']' 00:14:27.624 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1583794 00:14:27.624 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1583794 ']' 00:14:27.624 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1583794 00:14:27.624 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:27.624 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.624 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1583794 00:14:27.883 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.883 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.883 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1583794' 00:14:27.883 killing process with pid 1583794 00:14:27.883 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1583794 00:14:27.883 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@978 -- # wait 1583794 00:14:27.883 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:27.883 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:27.883 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:27.883 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:27.883 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:27.883 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:27.883 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:27.883 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:27.883 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:27.884 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.884 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:27.884 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.419 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:30.419 00:14:30.419 real 0m26.188s 00:14:30.420 user 0m31.312s 00:14:30.420 sys 0m7.122s 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:30.420 ************************************ 00:14:30.420 END TEST nvmf_ns_masking 00:14:30.420 
************************************ 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:30.420 ************************************ 00:14:30.420 START TEST nvmf_nvme_cli 00:14:30.420 ************************************ 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:30.420 * Looking for test storage... 
00:14:30.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:30.420 12:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:30.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.420 --rc 
genhtml_branch_coverage=1 00:14:30.420 --rc genhtml_function_coverage=1 00:14:30.420 --rc genhtml_legend=1 00:14:30.420 --rc geninfo_all_blocks=1 00:14:30.420 --rc geninfo_unexecuted_blocks=1 00:14:30.420 00:14:30.420 ' 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:30.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.420 --rc genhtml_branch_coverage=1 00:14:30.420 --rc genhtml_function_coverage=1 00:14:30.420 --rc genhtml_legend=1 00:14:30.420 --rc geninfo_all_blocks=1 00:14:30.420 --rc geninfo_unexecuted_blocks=1 00:14:30.420 00:14:30.420 ' 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:30.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.420 --rc genhtml_branch_coverage=1 00:14:30.420 --rc genhtml_function_coverage=1 00:14:30.420 --rc genhtml_legend=1 00:14:30.420 --rc geninfo_all_blocks=1 00:14:30.420 --rc geninfo_unexecuted_blocks=1 00:14:30.420 00:14:30.420 ' 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:30.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.420 --rc genhtml_branch_coverage=1 00:14:30.420 --rc genhtml_function_coverage=1 00:14:30.420 --rc genhtml_legend=1 00:14:30.420 --rc geninfo_all_blocks=1 00:14:30.420 --rc geninfo_unexecuted_blocks=1 00:14:30.420 00:14:30.420 ' 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.420 12:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:30.420 12:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:30.420 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:30.421 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:30.421 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.421 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.421 12:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.421 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:30.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:30.421 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:30.421 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:30.421 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:30.421 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:30.421 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:30.421 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:30.421 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:30.421 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:30.421 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.421 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:30.421 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:30.421 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:30.421 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.421 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:30.421 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:30.421 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:30.421 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:30.421 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:30.421 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.989 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:36.989 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:36.989 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:36.989 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:36.989 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:36.989 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:36.989 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:36.989 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:36.989 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:36.989 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:36.989 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:36.989 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:36.990 12:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:36.990 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:36.990 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.990 12:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:36.990 Found net devices under 0000:86:00.0: cvl_0_0 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:36.990 Found net devices under 0000:86:00.1: cvl_0_1 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:36.990 12:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:36.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:36.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:14:36.990 00:14:36.990 --- 10.0.0.2 ping statistics --- 00:14:36.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.990 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:36.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:36.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:14:36.990 00:14:36.990 --- 10.0.0.1 ping statistics --- 00:14:36.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.990 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:36.990 12:22:58 
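Stripped of xtrace noise, the `nvmf_tcp_init` phase traced above reduces to a short command sequence: flush both NIC ports, move the target-side port into a private network namespace, address both ends, open the NVMe/TCP port in the firewall, and verify reachability both ways. A hedged sketch, assuming the same `cvl_0_0`/`cvl_0_1` interface names and 10.0.0.0/24 addressing seen in this log (requires root and the actual hardware, so this is illustrative only):

```shell
# Sketch of nvmf_tcp_init as traced above; interface names and addresses
# are taken from this specific log and will differ on other machines.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0                  # start from clean interfaces
ip -4 addr flush cvl_0_1

ip netns add "$NS"                        # private namespace for the target
ip link set cvl_0_0 netns "$NS"           # target port moves into the netns

ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side (host namespace)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port; the SPDK_NVMF comment tag is what the
# cleanup path greps for later when restoring the ruleset.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.2                        # host -> target
ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> host
```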
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1590469 00:14:36.990 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1590469 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1590469 ']' 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.991 [2024-12-10 12:22:58.381350] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:14:36.991 [2024-12-10 12:22:58.381393] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.991 [2024-12-10 12:22:58.458786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:36.991 [2024-12-10 12:22:58.501299] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.991 [2024-12-10 12:22:58.501338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.991 [2024-12-10 12:22:58.501345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.991 [2024-12-10 12:22:58.501350] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.991 [2024-12-10 12:22:58.501356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
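The `nvmfappstart -m 0xF` step above launches the SPDK target inside the namespace and then blocks in `waitforlisten` until the RPC socket is usable. A rough equivalent of what `nvmf/common.sh` does here — the binary path is the one from this Jenkins workspace, and the polling loop is a simplified stand-in for the real `waitforlisten` helper:

```shell
# Hedged sketch of nvmfappstart as traced above. Adjust SPDK_BIN for your
# own tree; the real waitforlisten polls the RPC interface, not just the
# socket file, so this loop is a simplification.
SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

echo "Waiting for process to start up and listen on UNIX domain socket $RPC_SOCK..."
for i in $(seq 1 100); do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # target died during startup
    [ -S "$RPC_SOCK" ] && break                # socket exists: good enough here
    sleep 0.1
done
```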
00:14:36.991 [2024-12-10 12:22:58.502839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.991 [2024-12-10 12:22:58.502947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.991 [2024-12-10 12:22:58.503055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.991 [2024-12-10 12:22:58.503056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.991 [2024-12-10 12:22:58.641359] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.991 Malloc0 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.991 Malloc1 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.991 [2024-12-10 12:22:58.734155] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:36.991 00:14:36.991 Discovery Log Number of Records 2, Generation counter 2 00:14:36.991 =====Discovery Log Entry 0====== 00:14:36.991 trtype: tcp 00:14:36.991 adrfam: ipv4 00:14:36.991 subtype: current discovery subsystem 00:14:36.991 treq: not required 00:14:36.991 portid: 0 00:14:36.991 trsvcid: 4420 
00:14:36.991 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:36.991 traddr: 10.0.0.2 00:14:36.991 eflags: explicit discovery connections, duplicate discovery information 00:14:36.991 sectype: none 00:14:36.991 =====Discovery Log Entry 1====== 00:14:36.991 trtype: tcp 00:14:36.991 adrfam: ipv4 00:14:36.991 subtype: nvme subsystem 00:14:36.991 treq: not required 00:14:36.991 portid: 0 00:14:36.991 trsvcid: 4420 00:14:36.991 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:36.991 traddr: 10.0.0.2 00:14:36.991 eflags: none 00:14:36.991 sectype: none 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:36.991 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:37.926 12:23:00 
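The target configuration and host-side connect traced above (the `rpc_cmd` calls from `target/nvme_cli.sh` plus the `nvme discover`/`nvme connect` invocations) condense to a handful of commands. A sketch, assuming `rpc.py` in the usual `scripts/` location of an SPDK tree and the NQN/serial values shown in the log; in the real run `rpc_cmd` routes these through the already-open RPC socket:

```shell
RPC="scripts/rpc.py"   # path assumption; point at your SPDK tree's scripts/rpc.py

# Transport, two malloc bdevs, one subsystem with both namespaces,
# plus data and discovery listeners -- mirroring nvme_cli.sh lines 19-28.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
    -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Host side: the discovery log should show two entries (discovery + cnode1),
# then connecting exposes the two malloc namespaces as block devices.
nvme discover -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
```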
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:37.926 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:37.926 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:37.926 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:37.926 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:37.926 12:23:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:40.456 
12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:40.456 /dev/nvme0n2 ]] 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:40.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:40.456 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:40.715 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:40.715 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:40.715 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:14:40.715 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:40.715 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:40.715 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.715 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:40.715 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.715 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:40.715 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:40.715 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:40.715 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:40.715 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:40.715 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:40.715 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:40.715 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:40.715 rmmod nvme_tcp 00:14:40.715 rmmod nvme_fabrics 00:14:40.715 rmmod nvme_keyring 00:14:40.715 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:40.715 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:40.715 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:40.715 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1590469 ']' 
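The `get_nvme_devs` helper from `nvmf/common.sh`, exercised repeatedly in the trace above, is just a line-by-line filter over `nvme list` output: keep the first column whenever it names a device node, skipping the header and separator rows. A self-contained sketch, fed captured output (column layout assumed from this log) instead of the real `nvme list`:

```shell
# Sketch of get_nvme_devs as seen in the xtrace above: read each line of
# `nvme list` output and emit only first columns that look like /dev/nvme*.
get_nvme_devs() {
    local dev _
    while read -r dev _; do
        [[ $dev == /dev/nvme* ]] && echo "$dev"
    done
}

# Exercise it on captured output rather than live hardware:
get_nvme_devs <<'EOF'
Node                 SN                   Model
-------------------- -------------------- ----------------
/dev/nvme0n1         SPDKISFASTANDAWESOME SPDK_Controller1
/dev/nvme0n2         SPDKISFASTANDAWESOME SPDK_Controller1
EOF
```

This prints `/dev/nvme0n1` and `/dev/nvme0n2`, matching the `nvme_num=2` the test computes before disconnecting.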
00:14:40.715 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1590469 00:14:40.715 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1590469 ']' 00:14:40.716 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1590469 00:14:40.716 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:40.716 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:40.716 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1590469 00:14:40.716 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:40.716 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:40.716 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1590469' 00:14:40.716 killing process with pid 1590469 00:14:40.716 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1590469 00:14:40.716 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1590469 00:14:40.974 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:40.974 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:40.974 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:40.974 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:40.974 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:40.975 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:14:40.975 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:40.975 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:40.975 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:40.975 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.975 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.975 12:23:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.879 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:42.879 00:14:42.879 real 0m12.876s 00:14:42.879 user 0m19.508s 00:14:42.879 sys 0m5.114s 00:14:42.879 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:42.879 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.879 ************************************ 00:14:42.879 END TEST nvmf_nvme_cli 00:14:42.879 ************************************ 00:14:43.141 12:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:43.141 12:23:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:43.141 12:23:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:43.141 12:23:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.141 12:23:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:43.141 ************************************ 00:14:43.141 
START TEST nvmf_vfio_user 00:14:43.141 ************************************ 00:14:43.141 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:43.142 * Looking for test storage... 00:14:43.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:43.142 12:23:05 
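The nvmf_nvme_cli teardown traced above runs `iptables-save`, drops every rule tagged `SPDK_NVMF`, and feeds the remainder back through `iptables-restore`. A minimal sketch of just the filter stage (the helper name here is illustrative; the real logic is the `iptr` helper in nvmf/common.sh):

```shell
# Pure filter stage of the teardown: strip SPDK-tagged rules from an
# iptables-save dump. Root and a live iptables are only needed for the
# surrounding save/restore pipeline, not for the filter itself.
drop_spdk_rules() {
    grep -v SPDK_NVMF
}

# Intended pipeline (requires root):
#   iptables-save | drop_spdk_rules | iptables-restore
```

Keeping the filter as a pure stdin/stdout stage makes it testable without touching the host firewall.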
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:43.142 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:43.400 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:43.400 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:43.400 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:43.400 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:43.400 12:23:05 
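The trace above steps through `lt 1.15 2` via `cmp_versions` in scripts/common.sh: both versions are split on `.-:` into arrays and compared field by field. A simplified reimplementation of that comparison (this sketch is not the upstream function; it only mirrors the numeric, missing-fields-are-zero behavior the trace shows):

```shell
# ver_lt A B: succeed (return 0) when version A < version B, comparing
# dot-separated fields numerically; absent fields are treated as 0.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0    # first differing field decides
        (( x > y )) && return 1
    done
    return 1    # equal versions are not "less than"
}
```

So `ver_lt 1.15 2` succeeds, which is why the lcov 1.x check above takes the "old lcov" branch and appends the `--rc lcov_*_coverage=1` options.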
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:43.400 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:43.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.400 --rc genhtml_branch_coverage=1 00:14:43.400 --rc genhtml_function_coverage=1 00:14:43.400 --rc genhtml_legend=1 00:14:43.400 --rc geninfo_all_blocks=1 00:14:43.400 --rc geninfo_unexecuted_blocks=1 00:14:43.400 00:14:43.400 ' 00:14:43.400 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:43.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.400 --rc genhtml_branch_coverage=1 00:14:43.400 --rc genhtml_function_coverage=1 00:14:43.400 --rc genhtml_legend=1 00:14:43.400 --rc geninfo_all_blocks=1 00:14:43.400 --rc geninfo_unexecuted_blocks=1 00:14:43.400 00:14:43.401 ' 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:43.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.401 --rc genhtml_branch_coverage=1 00:14:43.401 --rc genhtml_function_coverage=1 00:14:43.401 --rc genhtml_legend=1 00:14:43.401 --rc geninfo_all_blocks=1 00:14:43.401 --rc geninfo_unexecuted_blocks=1 00:14:43.401 00:14:43.401 ' 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:43.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.401 --rc genhtml_branch_coverage=1 00:14:43.401 --rc genhtml_function_coverage=1 00:14:43.401 --rc genhtml_legend=1 00:14:43.401 --rc geninfo_all_blocks=1 00:14:43.401 --rc geninfo_unexecuted_blocks=1 00:14:43.401 00:14:43.401 ' 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:43.401 
12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:43.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:43.401 12:23:05 
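The paths/export.sh lines above keep re-prepending the same toolchain directories, so the exported PATH accumulates many duplicate entries. A sketch of one common de-duplication pass (this helper is illustrative and not part of the SPDK scripts; it keeps the first occurrence of each directory, preserving precedence):

```shell
# dedup_path PATHSTRING: print the colon-separated list with duplicate
# entries removed, keeping the first (highest-precedence) occurrence.
dedup_path() {
    local out= dir
    local IFS=:
    for dir in $1; do
        case ":$out:" in
            *":$dir:"*) ;;                  # already present, skip
            *) out=${out:+$out:}$dir ;;     # append new entry
        esac
    done
    printf '%s\n' "$out"
}
```

Because lookup precedence follows first occurrence, deduplicating this way does not change which binary `command -v` resolves.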
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1591736 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1591736' 00:14:43.401 Process pid: 1591736 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1591736 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # 
'[' -z 1591736 ']' 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.401 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:43.401 [2024-12-10 12:23:05.393553] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:14:43.401 [2024-12-10 12:23:05.393600] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.401 [2024-12-10 12:23:05.470741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:43.401 [2024-12-10 12:23:05.510485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.401 [2024-12-10 12:23:05.510527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.401 [2024-12-10 12:23:05.510534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.401 [2024-12-10 12:23:05.510540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.401 [2024-12-10 12:23:05.510546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:43.401 [2024-12-10 12:23:05.512077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.401 [2024-12-10 12:23:05.512202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.401 [2024-12-10 12:23:05.512317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.401 [2024-12-10 12:23:05.512318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:43.660 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:43.660 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:43.660 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:44.595 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:44.854 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:44.854 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:44.854 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:44.854 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:44.854 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:45.112 Malloc1 00:14:45.112 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:45.371 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:45.371 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:45.629 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:45.629 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:45.629 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:45.888 Malloc2 00:14:45.888 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:46.146 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:46.405 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:46.405 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:46.405 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:46.405 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 
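The per-device loop traced here (for Malloc1 and then Malloc2) always issues the same five steps: create the socket directory, create a 64 MiB/512 B malloc bdev, create the subsystem, attach the namespace, and add a VFIOUSER listener. A dry-run sketch that echoes that command sequence, grounded in the rpc.py calls logged above (the helper name and the echo-only behavior are illustrative):

```shell
# setup_vfio_user_dev I: print (dry-run) the commands the test script
# issues for device I; swap `echo` for direct invocation to run them
# against a live nvmf_tgt.
setup_vfio_user_dev() {
    local i=$1 root=/var/run/vfio-user
    local path=$root/domain/vfio-user$i/$i
    local nqn=nqn.2019-07.io.spdk:cnode$i
    echo "mkdir -p $path"
    echo "rpc.py bdev_malloc_create 64 512 -b Malloc$i"
    echo "rpc.py nvmf_create_subsystem $nqn -a -s SPDK$i"
    echo "rpc.py nvmf_subsystem_add_ns $nqn Malloc$i"
    echo "rpc.py nvmf_subsystem_add_listener $nqn -t VFIOUSER -a $path -s 0"
}
```

Note the listener address for VFIOUSER is a filesystem path, not an IP: the directory doubles as the vfio-user socket location that spdk_nvme_identify later connects to as `traddr`.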
-- # for i in $(seq 1 $NUM_DEVICES) 00:14:46.405 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:46.405 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:46.405 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:46.405 [2024-12-10 12:23:08.538908] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:14:46.405 [2024-12-10 12:23:08.538941] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1592297 ] 00:14:46.666 [2024-12-10 12:23:08.578075] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:46.666 [2024-12-10 12:23:08.582488] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:46.666 [2024-12-10 12:23:08.582512] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6c3e478000 00:14:46.666 [2024-12-10 12:23:08.583488] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.666 [2024-12-10 12:23:08.584491] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.666 [2024-12-10 12:23:08.585494] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.666 [2024-12-10 12:23:08.586496] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:46.666 [2024-12-10 12:23:08.587504] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:46.666 [2024-12-10 12:23:08.588506] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.666 [2024-12-10 12:23:08.589510] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:46.666 [2024-12-10 12:23:08.590513] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:46.666 [2024-12-10 12:23:08.591519] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:46.666 [2024-12-10 12:23:08.591529] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6c3e46d000 00:14:46.666 [2024-12-10 12:23:08.592473] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:46.666 [2024-12-10 12:23:08.606608] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:46.666 [2024-12-10 12:23:08.606638] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:46.666 [2024-12-10 12:23:08.609643] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:14:46.666 [2024-12-10 12:23:08.609682] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:46.666 [2024-12-10 12:23:08.609753] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:46.666 [2024-12-10 12:23:08.609769] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:46.666 [2024-12-10 12:23:08.609775] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:46.666 [2024-12-10 12:23:08.610636] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:46.666 [2024-12-10 12:23:08.610644] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:46.666 [2024-12-10 12:23:08.610651] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:46.666 [2024-12-10 12:23:08.611645] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:46.666 [2024-12-10 12:23:08.611653] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:46.666 [2024-12-10 12:23:08.611660] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:46.666 [2024-12-10 12:23:08.612649] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:46.666 [2024-12-10 12:23:08.612657] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:46.666 [2024-12-10 12:23:08.613654] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:46.666 [2024-12-10 12:23:08.613662] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:46.666 [2024-12-10 12:23:08.613667] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:46.666 [2024-12-10 12:23:08.613673] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:46.666 [2024-12-10 12:23:08.613780] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:46.666 [2024-12-10 12:23:08.613785] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:46.666 [2024-12-10 12:23:08.613790] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:46.666 [2024-12-10 12:23:08.614665] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:46.666 [2024-12-10 12:23:08.615672] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:46.666 [2024-12-10 12:23:08.616681] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:14:46.666 [2024-12-10 12:23:08.617684] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:46.666 [2024-12-10 12:23:08.617763] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:46.666 [2024-12-10 12:23:08.618694] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:46.666 [2024-12-10 12:23:08.618703] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:46.666 [2024-12-10 12:23:08.618708] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:46.666 [2024-12-10 12:23:08.618725] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:46.666 [2024-12-10 12:23:08.618737] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:46.666 [2024-12-10 12:23:08.618754] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:46.666 [2024-12-10 12:23:08.618759] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.666 [2024-12-10 12:23:08.618763] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.666 [2024-12-10 12:23:08.618776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:46.666 [2024-12-10 12:23:08.618830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
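The IDENTIFY commands traced here build PRP lists; the log reports "Number of PRP entries: 1" for the 4096-byte, page-aligned identify buffer. For a page-aligned transfer the entry count is simply the number of pages covered, i.e. ceil(len / page_size) — a hedged sketch of that count (aligned buffers only; an unaligned offset can add one more entry):

```shell
# prp_entries LEN [PAGE]: number of PRP entries needed for a
# page-aligned transfer of LEN bytes (PAGE defaults to 4096).
prp_entries() {
    local len=$1 page=${2:-4096}
    echo $(( (len + page - 1) / page ))
}
```

That matches the trace: the 4096-byte identify payload fits entirely in PRP1 (one entry, PRP2 = 0), which is why the printed command shows `PRP1 0x2000002fb000 PRP2 0x0`.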
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:46.666 [2024-12-10 12:23:08.618839] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:46.666 [2024-12-10 12:23:08.618843] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:46.666 [2024-12-10 12:23:08.618847] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:46.666 [2024-12-10 12:23:08.618851] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:46.666 [2024-12-10 12:23:08.618856] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:46.666 [2024-12-10 12:23:08.618860] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:46.666 [2024-12-10 12:23:08.618864] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:46.666 [2024-12-10 12:23:08.618871] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:46.666 [2024-12-10 12:23:08.618881] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:46.666 [2024-12-10 12:23:08.618893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:46.666 [2024-12-10 12:23:08.618903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.666 [2024-12-10 
12:23:08.618911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.666 [2024-12-10 12:23:08.618921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.666 [2024-12-10 12:23:08.618929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:46.666 [2024-12-10 12:23:08.618933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:46.666 [2024-12-10 12:23:08.618940] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:46.666 [2024-12-10 12:23:08.618948] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:46.666 [2024-12-10 12:23:08.618959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:46.666 [2024-12-10 12:23:08.618964] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:46.666 [2024-12-10 12:23:08.618968] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:46.666 [2024-12-10 12:23:08.618976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:46.666 [2024-12-10 12:23:08.618981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:14:46.666 [2024-12-10 12:23:08.618989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:46.666 [2024-12-10 12:23:08.619004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:46.666 [2024-12-10 12:23:08.619055] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:46.666 [2024-12-10 12:23:08.619062] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:46.666 [2024-12-10 12:23:08.619069] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:46.666 [2024-12-10 12:23:08.619073] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:46.666 [2024-12-10 12:23:08.619076] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.667 [2024-12-10 12:23:08.619082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:46.667 [2024-12-10 12:23:08.619092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:46.667 [2024-12-10 12:23:08.619102] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:46.667 [2024-12-10 12:23:08.619110] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:46.667 [2024-12-10 12:23:08.619117] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:46.667 [2024-12-10 12:23:08.619123] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:46.667 [2024-12-10 12:23:08.619127] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.667 [2024-12-10 12:23:08.619130] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.667 [2024-12-10 12:23:08.619136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:46.667 [2024-12-10 12:23:08.619169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:46.667 [2024-12-10 12:23:08.619180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:46.667 [2024-12-10 12:23:08.619187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:46.667 [2024-12-10 12:23:08.619193] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:46.667 [2024-12-10 12:23:08.619197] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.667 [2024-12-10 12:23:08.619200] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.667 [2024-12-10 12:23:08.619206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:46.667 [2024-12-10 12:23:08.619215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:46.667 [2024-12-10 12:23:08.619225] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:46.667 [2024-12-10 12:23:08.619231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:46.667 [2024-12-10 12:23:08.619238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:46.667 [2024-12-10 12:23:08.619244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:46.667 [2024-12-10 12:23:08.619249] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:46.667 [2024-12-10 12:23:08.619254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:46.667 [2024-12-10 12:23:08.619258] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:46.667 [2024-12-10 12:23:08.619262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:46.667 [2024-12-10 12:23:08.619267] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:46.667 [2024-12-10 12:23:08.619285] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:46.667 [2024-12-10 12:23:08.619294] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:46.667 [2024-12-10 12:23:08.619305] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:46.667 [2024-12-10 12:23:08.619314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:46.667 [2024-12-10 12:23:08.619324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:46.667 [2024-12-10 12:23:08.619338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:46.667 [2024-12-10 12:23:08.619348] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:46.667 [2024-12-10 12:23:08.619358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:46.667 [2024-12-10 12:23:08.619371] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:46.667 [2024-12-10 12:23:08.619376] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:46.667 [2024-12-10 12:23:08.619379] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:46.667 [2024-12-10 12:23:08.619382] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:46.667 [2024-12-10 12:23:08.619385] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:46.667 [2024-12-10 12:23:08.619391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:14:46.667 [2024-12-10 12:23:08.619397] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:46.667 [2024-12-10 12:23:08.619401] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:46.667 [2024-12-10 12:23:08.619404] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.667 [2024-12-10 12:23:08.619409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:46.667 [2024-12-10 12:23:08.619415] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:46.667 [2024-12-10 12:23:08.619419] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:46.667 [2024-12-10 12:23:08.619422] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.667 [2024-12-10 12:23:08.619427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:46.667 [2024-12-10 12:23:08.619434] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:46.667 [2024-12-10 12:23:08.619438] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:46.667 [2024-12-10 12:23:08.619441] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:46.667 [2024-12-10 12:23:08.619446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:46.667 [2024-12-10 12:23:08.619452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:14:46.667 [2024-12-10 12:23:08.619462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:46.667 [2024-12-10 12:23:08.619471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:46.667 [2024-12-10 12:23:08.619478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:46.667 ===================================================== 00:14:46.667 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:46.667 ===================================================== 00:14:46.667 Controller Capabilities/Features 00:14:46.667 ================================ 00:14:46.667 Vendor ID: 4e58 00:14:46.667 Subsystem Vendor ID: 4e58 00:14:46.667 Serial Number: SPDK1 00:14:46.667 Model Number: SPDK bdev Controller 00:14:46.667 Firmware Version: 25.01 00:14:46.667 Recommended Arb Burst: 6 00:14:46.667 IEEE OUI Identifier: 8d 6b 50 00:14:46.667 Multi-path I/O 00:14:46.667 May have multiple subsystem ports: Yes 00:14:46.667 May have multiple controllers: Yes 00:14:46.667 Associated with SR-IOV VF: No 00:14:46.667 Max Data Transfer Size: 131072 00:14:46.667 Max Number of Namespaces: 32 00:14:46.667 Max Number of I/O Queues: 127 00:14:46.667 NVMe Specification Version (VS): 1.3 00:14:46.667 NVMe Specification Version (Identify): 1.3 00:14:46.667 Maximum Queue Entries: 256 00:14:46.667 Contiguous Queues Required: Yes 00:14:46.667 Arbitration Mechanisms Supported 00:14:46.667 Weighted Round Robin: Not Supported 00:14:46.667 Vendor Specific: Not Supported 00:14:46.667 Reset Timeout: 15000 ms 00:14:46.667 Doorbell Stride: 4 bytes 00:14:46.667 NVM Subsystem Reset: Not Supported 00:14:46.667 Command Sets Supported 00:14:46.667 NVM Command Set: Supported 00:14:46.667 Boot Partition: Not Supported 00:14:46.667 Memory 
Page Size Minimum: 4096 bytes 00:14:46.667 Memory Page Size Maximum: 4096 bytes 00:14:46.667 Persistent Memory Region: Not Supported 00:14:46.667 Optional Asynchronous Events Supported 00:14:46.667 Namespace Attribute Notices: Supported 00:14:46.667 Firmware Activation Notices: Not Supported 00:14:46.667 ANA Change Notices: Not Supported 00:14:46.667 PLE Aggregate Log Change Notices: Not Supported 00:14:46.667 LBA Status Info Alert Notices: Not Supported 00:14:46.667 EGE Aggregate Log Change Notices: Not Supported 00:14:46.667 Normal NVM Subsystem Shutdown event: Not Supported 00:14:46.667 Zone Descriptor Change Notices: Not Supported 00:14:46.667 Discovery Log Change Notices: Not Supported 00:14:46.667 Controller Attributes 00:14:46.667 128-bit Host Identifier: Supported 00:14:46.667 Non-Operational Permissive Mode: Not Supported 00:14:46.667 NVM Sets: Not Supported 00:14:46.667 Read Recovery Levels: Not Supported 00:14:46.667 Endurance Groups: Not Supported 00:14:46.667 Predictable Latency Mode: Not Supported 00:14:46.667 Traffic Based Keep ALive: Not Supported 00:14:46.667 Namespace Granularity: Not Supported 00:14:46.667 SQ Associations: Not Supported 00:14:46.667 UUID List: Not Supported 00:14:46.667 Multi-Domain Subsystem: Not Supported 00:14:46.667 Fixed Capacity Management: Not Supported 00:14:46.667 Variable Capacity Management: Not Supported 00:14:46.667 Delete Endurance Group: Not Supported 00:14:46.667 Delete NVM Set: Not Supported 00:14:46.667 Extended LBA Formats Supported: Not Supported 00:14:46.667 Flexible Data Placement Supported: Not Supported 00:14:46.667 00:14:46.667 Controller Memory Buffer Support 00:14:46.667 ================================ 00:14:46.667 Supported: No 00:14:46.668 00:14:46.668 Persistent Memory Region Support 00:14:46.668 ================================ 00:14:46.668 Supported: No 00:14:46.668 00:14:46.668 Admin Command Set Attributes 00:14:46.668 ============================ 00:14:46.668 Security Send/Receive: Not Supported 
00:14:46.668 Format NVM: Not Supported 00:14:46.668 Firmware Activate/Download: Not Supported 00:14:46.668 Namespace Management: Not Supported 00:14:46.668 Device Self-Test: Not Supported 00:14:46.668 Directives: Not Supported 00:14:46.668 NVMe-MI: Not Supported 00:14:46.668 Virtualization Management: Not Supported 00:14:46.668 Doorbell Buffer Config: Not Supported 00:14:46.668 Get LBA Status Capability: Not Supported 00:14:46.668 Command & Feature Lockdown Capability: Not Supported 00:14:46.668 Abort Command Limit: 4 00:14:46.668 Async Event Request Limit: 4 00:14:46.668 Number of Firmware Slots: N/A 00:14:46.668 Firmware Slot 1 Read-Only: N/A 00:14:46.668 Firmware Activation Without Reset: N/A 00:14:46.668 Multiple Update Detection Support: N/A 00:14:46.668 Firmware Update Granularity: No Information Provided 00:14:46.668 Per-Namespace SMART Log: No 00:14:46.668 Asymmetric Namespace Access Log Page: Not Supported 00:14:46.668 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:46.668 Command Effects Log Page: Supported 00:14:46.668 Get Log Page Extended Data: Supported 00:14:46.668 Telemetry Log Pages: Not Supported 00:14:46.668 Persistent Event Log Pages: Not Supported 00:14:46.668 Supported Log Pages Log Page: May Support 00:14:46.668 Commands Supported & Effects Log Page: Not Supported 00:14:46.668 Feature Identifiers & Effects Log Page:May Support 00:14:46.668 NVMe-MI Commands & Effects Log Page: May Support 00:14:46.668 Data Area 4 for Telemetry Log: Not Supported 00:14:46.668 Error Log Page Entries Supported: 128 00:14:46.668 Keep Alive: Supported 00:14:46.668 Keep Alive Granularity: 10000 ms 00:14:46.668 00:14:46.668 NVM Command Set Attributes 00:14:46.668 ========================== 00:14:46.668 Submission Queue Entry Size 00:14:46.668 Max: 64 00:14:46.668 Min: 64 00:14:46.668 Completion Queue Entry Size 00:14:46.668 Max: 16 00:14:46.668 Min: 16 00:14:46.668 Number of Namespaces: 32 00:14:46.668 Compare Command: Supported 00:14:46.668 Write Uncorrectable 
Command: Not Supported 00:14:46.668 Dataset Management Command: Supported 00:14:46.668 Write Zeroes Command: Supported 00:14:46.668 Set Features Save Field: Not Supported 00:14:46.668 Reservations: Not Supported 00:14:46.668 Timestamp: Not Supported 00:14:46.668 Copy: Supported 00:14:46.668 Volatile Write Cache: Present 00:14:46.668 Atomic Write Unit (Normal): 1 00:14:46.668 Atomic Write Unit (PFail): 1 00:14:46.668 Atomic Compare & Write Unit: 1 00:14:46.668 Fused Compare & Write: Supported 00:14:46.668 Scatter-Gather List 00:14:46.668 SGL Command Set: Supported (Dword aligned) 00:14:46.668 SGL Keyed: Not Supported 00:14:46.668 SGL Bit Bucket Descriptor: Not Supported 00:14:46.668 SGL Metadata Pointer: Not Supported 00:14:46.668 Oversized SGL: Not Supported 00:14:46.668 SGL Metadata Address: Not Supported 00:14:46.668 SGL Offset: Not Supported 00:14:46.668 Transport SGL Data Block: Not Supported 00:14:46.668 Replay Protected Memory Block: Not Supported 00:14:46.668 00:14:46.668 Firmware Slot Information 00:14:46.668 ========================= 00:14:46.668 Active slot: 1 00:14:46.668 Slot 1 Firmware Revision: 25.01 00:14:46.668 00:14:46.668 00:14:46.668 Commands Supported and Effects 00:14:46.668 ============================== 00:14:46.668 Admin Commands 00:14:46.668 -------------- 00:14:46.668 Get Log Page (02h): Supported 00:14:46.668 Identify (06h): Supported 00:14:46.668 Abort (08h): Supported 00:14:46.668 Set Features (09h): Supported 00:14:46.668 Get Features (0Ah): Supported 00:14:46.668 Asynchronous Event Request (0Ch): Supported 00:14:46.668 Keep Alive (18h): Supported 00:14:46.668 I/O Commands 00:14:46.668 ------------ 00:14:46.668 Flush (00h): Supported LBA-Change 00:14:46.668 Write (01h): Supported LBA-Change 00:14:46.668 Read (02h): Supported 00:14:46.668 Compare (05h): Supported 00:14:46.668 Write Zeroes (08h): Supported LBA-Change 00:14:46.668 Dataset Management (09h): Supported LBA-Change 00:14:46.668 Copy (19h): Supported LBA-Change 00:14:46.668 
00:14:46.668 Error Log 00:14:46.668 ========= 00:14:46.668 00:14:46.668 Arbitration 00:14:46.668 =========== 00:14:46.668 Arbitration Burst: 1 00:14:46.668 00:14:46.668 Power Management 00:14:46.668 ================ 00:14:46.668 Number of Power States: 1 00:14:46.668 Current Power State: Power State #0 00:14:46.668 Power State #0: 00:14:46.668 Max Power: 0.00 W 00:14:46.668 Non-Operational State: Operational 00:14:46.668 Entry Latency: Not Reported 00:14:46.668 Exit Latency: Not Reported 00:14:46.668 Relative Read Throughput: 0 00:14:46.668 Relative Read Latency: 0 00:14:46.668 Relative Write Throughput: 0 00:14:46.668 Relative Write Latency: 0 00:14:46.668 Idle Power: Not Reported 00:14:46.668 Active Power: Not Reported 00:14:46.668 Non-Operational Permissive Mode: Not Supported 00:14:46.668 00:14:46.668 Health Information 00:14:46.668 ================== 00:14:46.668 Critical Warnings: 00:14:46.668 Available Spare Space: OK 00:14:46.668 Temperature: OK 00:14:46.668 Device Reliability: OK 00:14:46.668 Read Only: No 00:14:46.668 Volatile Memory Backup: OK 00:14:46.668 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:46.668 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:46.668 Available Spare: 0% 00:14:46.668 Available Sp[2024-12-10 12:23:08.619564] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:46.668 [2024-12-10 12:23:08.619579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:46.668 [2024-12-10 12:23:08.619606] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:46.668 [2024-12-10 12:23:08.619615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.668 [2024-12-10 12:23:08.619620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.668 [2024-12-10 12:23:08.619626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.668 [2024-12-10 12:23:08.619633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:46.668 [2024-12-10 12:23:08.621165] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:46.668 [2024-12-10 12:23:08.621176] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:46.668 [2024-12-10 12:23:08.621713] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:46.668 [2024-12-10 12:23:08.621763] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:46.668 [2024-12-10 12:23:08.621769] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:46.668 [2024-12-10 12:23:08.622714] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:46.668 [2024-12-10 12:23:08.622724] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:46.668 [2024-12-10 12:23:08.622775] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:46.668 [2024-12-10 12:23:08.626168] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:46.668 are Threshold: 0% 00:14:46.668 Life Percentage Used: 0% 
00:14:46.668 Data Units Read: 0 00:14:46.668 Data Units Written: 0 00:14:46.668 Host Read Commands: 0 00:14:46.668 Host Write Commands: 0 00:14:46.668 Controller Busy Time: 0 minutes 00:14:46.668 Power Cycles: 0 00:14:46.668 Power On Hours: 0 hours 00:14:46.668 Unsafe Shutdowns: 0 00:14:46.668 Unrecoverable Media Errors: 0 00:14:46.668 Lifetime Error Log Entries: 0 00:14:46.668 Warning Temperature Time: 0 minutes 00:14:46.668 Critical Temperature Time: 0 minutes 00:14:46.668 00:14:46.668 Number of Queues 00:14:46.668 ================ 00:14:46.668 Number of I/O Submission Queues: 127 00:14:46.668 Number of I/O Completion Queues: 127 00:14:46.668 00:14:46.668 Active Namespaces 00:14:46.668 ================= 00:14:46.668 Namespace ID:1 00:14:46.668 Error Recovery Timeout: Unlimited 00:14:46.668 Command Set Identifier: NVM (00h) 00:14:46.668 Deallocate: Supported 00:14:46.668 Deallocated/Unwritten Error: Not Supported 00:14:46.668 Deallocated Read Value: Unknown 00:14:46.668 Deallocate in Write Zeroes: Not Supported 00:14:46.668 Deallocated Guard Field: 0xFFFF 00:14:46.668 Flush: Supported 00:14:46.668 Reservation: Supported 00:14:46.668 Namespace Sharing Capabilities: Multiple Controllers 00:14:46.668 Size (in LBAs): 131072 (0GiB) 00:14:46.668 Capacity (in LBAs): 131072 (0GiB) 00:14:46.668 Utilization (in LBAs): 131072 (0GiB) 00:14:46.668 NGUID: 2B1586F2780F4447AB94E23DFC9A3D09 00:14:46.668 UUID: 2b1586f2-780f-4447-ab94-e23dfc9a3d09 00:14:46.668 Thin Provisioning: Not Supported 00:14:46.668 Per-NS Atomic Units: Yes 00:14:46.668 Atomic Boundary Size (Normal): 0 00:14:46.668 Atomic Boundary Size (PFail): 0 00:14:46.668 Atomic Boundary Offset: 0 00:14:46.668 Maximum Single Source Range Length: 65535 00:14:46.669 Maximum Copy Length: 65535 00:14:46.669 Maximum Source Range Count: 1 00:14:46.669 NGUID/EUI64 Never Reused: No 00:14:46.669 Namespace Write Protected: No 00:14:46.669 Number of LBA Formats: 1 00:14:46.669 Current LBA Format: LBA Format #00 00:14:46.669 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:14:46.669 00:14:46.669 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:46.927 [2024-12-10 12:23:08.855987] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:52.194 Initializing NVMe Controllers 00:14:52.194 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:52.194 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:52.194 Initialization complete. Launching workers. 00:14:52.194 ======================================================== 00:14:52.194 Latency(us) 00:14:52.194 Device Information : IOPS MiB/s Average min max 00:14:52.194 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39965.40 156.11 3202.87 1008.14 7163.23 00:14:52.194 ======================================================== 00:14:52.194 Total : 39965.40 156.11 3202.87 1008.14 7163.23 00:14:52.194 00:14:52.194 [2024-12-10 12:23:13.876847] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:52.194 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:52.194 [2024-12-10 12:23:14.110983] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:57.460 Initializing NVMe Controllers 00:14:57.460 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:57.460 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:57.460 Initialization complete. Launching workers. 00:14:57.460 ======================================================== 00:14:57.460 Latency(us) 00:14:57.460 Device Information : IOPS MiB/s Average min max 00:14:57.460 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16060.65 62.74 7975.15 5984.06 8978.08 00:14:57.460 ======================================================== 00:14:57.460 Total : 16060.65 62.74 7975.15 5984.06 8978.08 00:14:57.460 00:14:57.460 [2024-12-10 12:23:19.150091] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:57.460 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:57.460 [2024-12-10 12:23:19.355047] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:02.725 [2024-12-10 12:23:24.429442] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:02.725 Initializing NVMe Controllers 00:15:02.725 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:02.725 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:02.725 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:02.725 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:02.725 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:02.725 Initialization complete. 
Launching workers. 00:15:02.725 Starting thread on core 2 00:15:02.725 Starting thread on core 3 00:15:02.725 Starting thread on core 1 00:15:02.725 12:23:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:02.725 [2024-12-10 12:23:24.729561] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:06.914 [2024-12-10 12:23:28.359384] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:06.914 Initializing NVMe Controllers 00:15:06.914 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:06.914 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:06.914 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:06.914 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:06.914 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:06.914 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:06.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration run with configuration: 00:15:06.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:06.914 Initialization complete. Launching workers. 
00:15:06.914 Starting thread on core 1 with urgent priority queue 00:15:06.914 Starting thread on core 2 with urgent priority queue 00:15:06.914 Starting thread on core 3 with urgent priority queue 00:15:06.914 Starting thread on core 0 with urgent priority queue 00:15:06.914 SPDK bdev Controller (SPDK1 ) core 0: 6681.33 IO/s 14.97 secs/100000 ios 00:15:06.914 SPDK bdev Controller (SPDK1 ) core 1: 5472.67 IO/s 18.27 secs/100000 ios 00:15:06.914 SPDK bdev Controller (SPDK1 ) core 2: 6716.33 IO/s 14.89 secs/100000 ios 00:15:06.914 SPDK bdev Controller (SPDK1 ) core 3: 5278.00 IO/s 18.95 secs/100000 ios 00:15:06.914 ======================================================== 00:15:06.914 00:15:06.914 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:06.914 [2024-12-10 12:23:28.650617] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:06.914 Initializing NVMe Controllers 00:15:06.914 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:06.914 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:06.914 Namespace ID: 1 size: 0GB 00:15:06.914 Initialization complete. 00:15:06.914 INFO: using host memory buffer for IO 00:15:06.914 Hello world! 
00:15:06.914 [2024-12-10 12:23:28.684851] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:06.914 12:23:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:15:06.914 [2024-12-10 12:23:28.970644] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:07.850 Initializing NVMe Controllers
00:15:07.850 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:15:07.850 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:15:07.850 Initialization complete. Launching workers.
00:15:07.850 submit (in ns) avg, min, max = 6488.2, 3198.3, 3999961.7
00:15:07.850 complete (in ns) avg, min, max = 21605.0, 1751.3, 5993126.1
00:15:07.850
00:15:07.850 Submit histogram
00:15:07.850 ================
00:15:07.850 Range in us Cumulative Count
00:15:07.850 3.186 - 3.200: 0.0127% ( 2)
00:15:07.850 3.200 - 3.214: 0.0891% ( 12)
00:15:07.850 3.214 - 3.228: 0.2991% ( 33)
00:15:07.850 3.228 - 3.242: 0.4519% ( 24)
00:15:07.850 3.242 - 3.256: 0.6555% ( 32)
00:15:07.850 3.256 - 3.270: 0.9356% ( 44)
00:15:07.850 3.270 - 3.283: 1.6739% ( 116)
00:15:07.850 3.283 - 3.297: 4.3343% ( 418)
00:15:07.850 3.297 - 3.311: 9.1522% ( 757)
00:15:07.850 3.311 - 3.325: 14.1230% ( 781)
00:15:07.850 3.325 - 3.339: 19.6983% ( 876)
00:15:07.850 3.339 - 3.353: 25.7765% ( 955)
00:15:07.850 3.353 - 3.367: 30.8490% ( 797)
00:15:07.850 3.367 - 3.381: 36.3226% ( 860)
00:15:07.850 3.381 - 3.395: 41.5542% ( 822)
00:15:07.850 3.395 - 3.409: 46.3086% ( 747)
00:15:07.850 3.409 - 3.423: 50.2228% ( 615)
00:15:07.850 3.423 - 3.437: 55.0980% ( 766)
00:15:07.850 3.437 - 3.450: 61.9908% ( 1083)
00:15:07.850 3.450 - 3.464: 66.9552% ( 780)
00:15:07.850 3.464 - 3.478: 71.6013% ( 730)
00:15:07.850 3.478 - 3.492: 76.8012% ( 817)
00:15:07.850 3.492 - 3.506: 80.7217% ( 616)
00:15:07.850 3.506 - 3.520: 83.5540% ( 445)
00:15:07.850 3.520 - 3.534: 85.3424% ( 281)
00:15:07.850 3.534 - 3.548: 86.5008% ( 182)
00:15:07.850 3.548 - 3.562: 87.2581% ( 119)
00:15:07.850 3.562 - 3.590: 88.2892% ( 162)
00:15:07.850 3.590 - 3.617: 89.7021% ( 222)
00:15:07.850 3.617 - 3.645: 91.3633% ( 261)
00:15:07.850 3.645 - 3.673: 92.9099% ( 243)
00:15:07.850 3.673 - 3.701: 94.7620% ( 291)
00:15:07.850 3.701 - 3.729: 96.2131% ( 228)
00:15:07.850 3.729 - 3.757: 97.4414% ( 193)
00:15:07.850 3.757 - 3.784: 98.3961% ( 150)
00:15:07.850 3.784 - 3.812: 98.9498% ( 87)
00:15:07.850 3.812 - 3.840: 99.2999% ( 55)
00:15:07.850 3.840 - 3.868: 99.5036% ( 32)
00:15:07.850 3.868 - 3.896: 99.6118% ( 17)
00:15:07.850 3.896 - 3.923: 99.6309% ( 3)
00:15:07.850 3.923 - 3.951: 99.6499% ( 3)
00:15:07.850 4.007 - 4.035: 99.6563% ( 1)
00:15:07.850 4.063 - 4.090: 99.6627% ( 1)
00:15:07.850 4.230 - 4.257: 99.6690% ( 1)
00:15:07.850 5.037 - 5.064: 99.6754% ( 1)
00:15:07.850 5.064 - 5.092: 99.6818% ( 1)
00:15:07.850 5.148 - 5.176: 99.6881% ( 1)
00:15:07.850 5.176 - 5.203: 99.6945% ( 1)
00:15:07.850 5.231 - 5.259: 99.7009% ( 1)
00:15:07.850 5.287 - 5.315: 99.7072% ( 1)
00:15:07.850 5.482 - 5.510: 99.7136% ( 1)
00:15:07.850 5.510 - 5.537: 99.7200% ( 1)
00:15:07.850 5.704 - 5.732: 99.7327% ( 2)
00:15:07.850 5.732 - 5.760: 99.7391% ( 1)
00:15:07.850 5.760 - 5.788: 99.7454% ( 1)
00:15:07.850 5.788 - 5.816: 99.7581% ( 2)
00:15:07.850 5.816 - 5.843: 99.7645% ( 1)
00:15:07.850 5.843 - 5.871: 99.7709% ( 1)
00:15:07.850 5.899 - 5.927: 99.7772% ( 1)
00:15:07.850 5.927 - 5.955: 99.7836% ( 1)
00:15:07.850 6.038 - 6.066: 99.7900% ( 1)
00:15:07.850 6.094 - 6.122: 99.8027% ( 2)
00:15:07.850 6.122 - 6.150: 99.8091% ( 1)
00:15:07.850 6.205 - 6.233: 99.8154% ( 1)
00:15:07.850 6.261 - 6.289: 99.8282% ( 2)
00:15:07.850 6.317 - 6.344: 99.8345% ( 1)
00:15:07.850 6.344 - 6.372: 99.8536% ( 3)
00:15:07.850 6.372 - 6.400: 99.8600% ( 1)
00:15:07.850 6.456 - 6.483: 99.8663% ( 1)
00:15:07.850 6.539 - 6.567: 99.8727% ( 1)
00:15:07.850 6.762 - 6.790: 99.8791% ( 1)
00:15:07.850 6.817 - 6.845: 99.8854% ( 1)
00:15:07.850 6.845 - 6.873: 99.8918% ( 1)
00:15:07.850 7.123 - 7.179: 99.8982% ( 1)
00:15:07.850 7.569 - 7.624: 99.9045% ( 1)
00:15:07.850 7.736 - 7.791: 99.9109% ( 1)
00:15:07.850 7.958 - 8.014: 99.9173% ( 1)
00:15:07.850 13.690 - 13.746: 99.9236% ( 1)
00:15:07.850 3846.678 - 3875.172: 99.9300% ( 1)
00:15:07.850 [2024-12-10 12:23:29.990796] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:08.109 3989.148 - 4017.642: 100.0000% ( 11)
00:15:08.109
00:15:08.109 Complete histogram
00:15:08.109 ==================
00:15:08.109 Range in us Cumulative Count
00:15:08.109 1.746 - 1.753: 0.0064% ( 1)
00:15:08.109 1.753 - 1.760: 0.0127% ( 1)
00:15:08.109 1.760 - 1.767: 0.0446% ( 5)
00:15:08.109 1.767 - 1.774: 0.2291% ( 29)
00:15:08.109 1.774 - 1.781: 0.3819% ( 24)
00:15:08.109 1.781 - 1.795: 0.5028% ( 19)
00:15:08.109 1.795 - 1.809: 0.5919% ( 14)
00:15:08.109 1.809 - 1.823: 4.3852% ( 596)
00:15:08.109 1.823 - 1.837: 16.3951% ( 1887)
00:15:08.109 1.837 - 1.850: 20.1247% ( 586)
00:15:08.109 1.850 - 1.864: 22.2951% ( 341)
00:15:08.109 1.864 - 1.878: 44.6729% ( 3516)
00:15:08.109 1.878 - 1.892: 84.2477% ( 6218)
00:15:08.109 1.892 - 1.906: 92.9990% ( 1375)
00:15:08.109 1.906 - 1.920: 96.1749% ( 499)
00:15:08.109 1.920 - 1.934: 96.8941% ( 113)
00:15:08.109 1.934 - 1.948: 97.5815% ( 108)
00:15:08.109 1.948 - 1.962: 98.5616% ( 154)
00:15:08.109 1.962 - 1.976: 99.1471% ( 92)
00:15:08.109 1.976 - 1.990: 99.2299% ( 13)
00:15:08.109 1.990 - 2.003: 99.2681% ( 6)
00:15:08.109 2.017 - 2.031: 99.2744% ( 1)
00:15:08.109 2.031 - 2.045: 99.2808% ( 1)
00:15:08.109 2.059 - 2.073: 99.2872% ( 1)
00:15:08.109 2.073 - 2.087: 99.2999% ( 2)
00:15:08.109 2.240 - 2.254: 99.3063% ( 1)
00:15:08.109 2.323 - 2.337: 99.3126% ( 1)
00:15:08.109 2.337 - 2.351: 99.3190% ( 1)
00:15:08.109 2.351 - 2.365: 99.3254% ( 1)
00:15:08.109 3.492 - 3.506: 99.3317% ( 1)
00:15:08.109 3.617 - 3.645: 99.3381% ( 1)
00:15:08.109 3.812 - 3.840: 99.3445% ( 1)
00:15:08.109 3.868 - 3.896: 99.3508% ( 1)
00:15:08.109 4.007 - 4.035: 99.3572% ( 1)
00:15:08.109 4.090 - 4.118: 99.3635% ( 1)
00:15:08.109 4.341 - 4.369: 99.3699% ( 1)
00:15:08.109 4.369 - 4.397: 99.3763% ( 1)
00:15:08.109 4.508 - 4.536: 99.3826% ( 1)
00:15:08.109 4.703 - 4.730: 99.3890% ( 1)
00:15:08.109 4.870 - 4.897: 99.3954% ( 1)
00:15:08.109 5.037 - 5.064: 99.4017% ( 1)
00:15:08.109 5.120 - 5.148: 99.4081% ( 1)
00:15:08.109 5.203 - 5.231: 99.4145% ( 1)
00:15:08.109 5.231 - 5.259: 99.4208% ( 1)
00:15:08.109 5.398 - 5.426: 99.4272% ( 1)
00:15:08.109 5.593 - 5.621: 99.4336% ( 1)
00:15:08.109 5.649 - 5.677: 99.4399% ( 1)
00:15:08.109 5.677 - 5.704: 99.4463% ( 1)
00:15:08.109 5.899 - 5.927: 99.4526% ( 1)
00:15:08.109 5.927 - 5.955: 99.4590% ( 1)
00:15:08.109 6.010 - 6.038: 99.4717% ( 2)
00:15:08.109 6.094 - 6.122: 99.4781% ( 1)
00:15:08.109 6.372 - 6.400: 99.4845% ( 1)
00:15:08.109 6.400 - 6.428: 99.4908% ( 1)
00:15:08.109 12.243 - 12.299: 99.4972% ( 1)
00:15:08.109 166.511 - 167.402: 99.5036% ( 1)
00:15:08.109 2023.068 - 2037.315: 99.5099% ( 1)
00:15:08.109 2165.537 - 2179.784: 99.5163% ( 1)
00:15:08.109 3989.148 - 4017.642: 99.9936% ( 75)
00:15:08.109 5983.722 - 6012.216: 100.0000% ( 1)
00:15:08.109
00:15:08.109 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1
00:15:08.109 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1
00:15:08.109 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1
00:15:08.109 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3
00:15:08.109 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_get_subsystems
00:15:08.109 [
00:15:08.109 {
00:15:08.109 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:15:08.109 "subtype": "Discovery",
00:15:08.109 "listen_addresses": [],
00:15:08.109 "allow_any_host": true,
00:15:08.109 "hosts": []
00:15:08.109 },
00:15:08.109 {
00:15:08.109 "nqn": "nqn.2019-07.io.spdk:cnode1",
00:15:08.109 "subtype": "NVMe",
00:15:08.109 "listen_addresses": [
00:15:08.109 {
00:15:08.109 "trtype": "VFIOUSER",
00:15:08.109 "adrfam": "IPv4",
00:15:08.109 "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:15:08.109 "trsvcid": "0"
00:15:08.109 }
00:15:08.109 ],
00:15:08.109 "allow_any_host": true,
00:15:08.109 "hosts": [],
00:15:08.109 "serial_number": "SPDK1",
00:15:08.109 "model_number": "SPDK bdev Controller",
00:15:08.109 "max_namespaces": 32,
00:15:08.109 "min_cntlid": 1,
00:15:08.109 "max_cntlid": 65519,
00:15:08.109 "namespaces": [
00:15:08.109 {
00:15:08.109 "nsid": 1,
00:15:08.109 "bdev_name": "Malloc1",
00:15:08.109 "name": "Malloc1",
00:15:08.109 "nguid": "2B1586F2780F4447AB94E23DFC9A3D09",
00:15:08.109 "uuid": "2b1586f2-780f-4447-ab94-e23dfc9a3d09"
00:15:08.109 }
00:15:08.109 ]
00:15:08.109 },
00:15:08.109 {
00:15:08.109 "nqn": "nqn.2019-07.io.spdk:cnode2",
00:15:08.109 "subtype": "NVMe",
00:15:08.109 "listen_addresses": [
00:15:08.109 {
00:15:08.109 "trtype": "VFIOUSER",
00:15:08.109 "adrfam": "IPv4",
00:15:08.109 "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:15:08.109 "trsvcid": "0"
00:15:08.109 }
00:15:08.109 ],
00:15:08.109 "allow_any_host": true,
00:15:08.109 "hosts": [],
00:15:08.109 "serial_number": "SPDK2",
00:15:08.109 "model_number": "SPDK bdev Controller",
00:15:08.109 "max_namespaces": 32,
00:15:08.109 "min_cntlid": 1,
00:15:08.109 "max_cntlid": 65519,
00:15:08.109 "namespaces": [
00:15:08.109 {
00:15:08.109 "nsid": 1,
00:15:08.109 "bdev_name": "Malloc2",
00:15:08.109 "name": "Malloc2",
00:15:08.109 "nguid": "B6CFE917F328488FBA90F8528FE367C9",
00:15:08.109 "uuid": "b6cfe917-f328-488f-ba90-f8528fe367c9"
00:15:08.109 }
00:15:08.109 ]
00:15:08.109 }
00:15:08.109 ]
00:15:08.109 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:15:08.109 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1595779
00:15:08.109 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file
00:15:08.109 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file
00:15:08.109 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0
00:15:08.110 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:15:08.110 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:15:08.110 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0
00:15:08.110 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file
00:15:08.110 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
00:15:08.368 [2024-12-10 12:23:30.402588] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:08.368 Malloc3
00:15:08.368 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
00:15:08.626 [2024-12-10 12:23:30.659589] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:08.627 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_get_subsystems
00:15:08.627 Asynchronous Event Request test
00:15:08.627 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:15:08.627 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:15:08.627 Registering asynchronous event callbacks...
00:15:08.627 Starting namespace attribute notice tests for all controllers...
00:15:08.627 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:15:08.627 aer_cb - Changed Namespace
00:15:08.627 Cleaning up...
00:15:08.886 [
00:15:08.886 {
00:15:08.886 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:15:08.886 "subtype": "Discovery",
00:15:08.886 "listen_addresses": [],
00:15:08.886 "allow_any_host": true,
00:15:08.886 "hosts": []
00:15:08.886 },
00:15:08.886 {
00:15:08.886 "nqn": "nqn.2019-07.io.spdk:cnode1",
00:15:08.886 "subtype": "NVMe",
00:15:08.886 "listen_addresses": [
00:15:08.886 {
00:15:08.886 "trtype": "VFIOUSER",
00:15:08.886 "adrfam": "IPv4",
00:15:08.886 "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:15:08.887 "trsvcid": "0"
00:15:08.887 }
00:15:08.887 ],
00:15:08.887 "allow_any_host": true,
00:15:08.887 "hosts": [],
00:15:08.887 "serial_number": "SPDK1",
00:15:08.887 "model_number": "SPDK bdev Controller",
00:15:08.887 "max_namespaces": 32,
00:15:08.887 "min_cntlid": 1,
00:15:08.887 "max_cntlid": 65519,
00:15:08.887 "namespaces": [
00:15:08.887 {
00:15:08.887 "nsid": 1,
00:15:08.887 "bdev_name": "Malloc1",
00:15:08.887 "name": "Malloc1",
00:15:08.887 "nguid": "2B1586F2780F4447AB94E23DFC9A3D09",
00:15:08.887 "uuid": "2b1586f2-780f-4447-ab94-e23dfc9a3d09"
00:15:08.887 },
00:15:08.887 {
00:15:08.887 "nsid": 2,
00:15:08.887 "bdev_name": "Malloc3",
00:15:08.887 "name": "Malloc3",
00:15:08.887 "nguid": "65AB430561224B3D8B3896C9C7CC159E",
00:15:08.887 "uuid": "65ab4305-6122-4b3d-8b38-96c9c7cc159e"
00:15:08.887 }
00:15:08.887 ]
00:15:08.887 },
00:15:08.887 {
00:15:08.887 "nqn": "nqn.2019-07.io.spdk:cnode2",
00:15:08.887 "subtype": "NVMe",
00:15:08.887 "listen_addresses": [
00:15:08.887 {
00:15:08.887 "trtype": "VFIOUSER",
00:15:08.887 "adrfam": "IPv4",
00:15:08.887 "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:15:08.887 "trsvcid": "0"
00:15:08.887 }
00:15:08.887 ],
00:15:08.887 "allow_any_host": true,
00:15:08.887 "hosts": [],
00:15:08.887 "serial_number": "SPDK2",
00:15:08.887 "model_number": "SPDK bdev Controller",
00:15:08.887 "max_namespaces": 32,
00:15:08.887 "min_cntlid": 1,
00:15:08.887 "max_cntlid": 65519,
00:15:08.887 "namespaces": [
00:15:08.887 {
00:15:08.887 "nsid": 1,
00:15:08.887 "bdev_name": "Malloc2",
00:15:08.887 "name": "Malloc2",
00:15:08.887 "nguid": "B6CFE917F328488FBA90F8528FE367C9",
00:15:08.887 "uuid": "b6cfe917-f328-488f-ba90-f8528fe367c9"
00:15:08.887 }
00:15:08.887 ]
00:15:08.887 }
00:15:08.887 ]
00:15:08.887 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1595779
00:15:08.887 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:15:08.887 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2
00:15:08.887 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2
00:15:08.887 12:23:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci
00:15:08.887 [2024-12-10 12:23:30.913626] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization...
00:15:08.887 [2024-12-10 12:23:30.913657] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1595982 ]
00:15:08.887 [2024-12-10 12:23:30.951958] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2
00:15:08.887 [2024-12-10 12:23:30.960412] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
00:15:08.887 [2024-12-10 12:23:30.960436] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd574cc8000
00:15:08.887 [2024-12-10 12:23:30.961412] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:15:08.887 [2024-12-10 12:23:30.962419] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:15:08.887 [2024-12-10 12:23:30.963426] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:15:08.887 [2024-12-10 12:23:30.964430] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0
00:15:08.887 [2024-12-10 12:23:30.965431] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:15:08.887 [2024-12-10 12:23:30.966439] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:15:08.887 [2024-12-10 12:23:30.967446] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:15:08.887 [2024-12-10 12:23:30.968456] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:15:08.887 [2024-12-10 12:23:30.969464] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32
00:15:08.887 [2024-12-10 12:23:30.969473] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd574cbd000
00:15:08.887 [2024-12-10 12:23:30.970416] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:15:08.887 [2024-12-10 12:23:30.979929] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully
00:15:08.887 [2024-12-10 12:23:30.979957] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout)
00:15:08.887 [2024-12-10 12:23:30.985035] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff
00:15:08.887 [2024-12-10 12:23:30.985071] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192
00:15:08.887 [2024-12-10 12:23:30.985143] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout)
00:15:08.887 [2024-12-10 12:23:30.985159] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout)
00:15:08.887 [2024-12-10 12:23:30.985166] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout)
00:15:08.887 [2024-12-10 12:23:30.986034] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300
00:15:08.887 [2024-12-10 12:23:30.986043] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout)
00:15:08.887 [2024-12-10 12:23:30.986050] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout)
00:15:08.887 [2024-12-10 12:23:30.987037] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff
00:15:08.887 [2024-12-10 12:23:30.987047] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout)
00:15:08.887 [2024-12-10 12:23:30.987053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms)
00:15:08.887 [2024-12-10 12:23:30.988043] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0
00:15:08.887 [2024-12-10 12:23:30.988052] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:15:08.887 [2024-12-10 12:23:30.989052] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0
00:15:08.887 [2024-12-10 12:23:30.989060] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0
00:15:08.887 [2024-12-10 12:23:30.989065] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms)
00:15:08.887 [2024-12-10 12:23:30.989071] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:15:08.887 [2024-12-10 12:23:30.989179] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1
00:15:08.887 [2024-12-10 12:23:30.989184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:15:08.887 [2024-12-10 12:23:30.989188] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000
00:15:08.887 [2024-12-10 12:23:30.990061] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000
00:15:08.887 [2024-12-10 12:23:30.991066] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff
00:15:08.887 [2024-12-10 12:23:30.992075] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:15:08.887 [2024-12-10 12:23:30.993083] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:15:08.887 [2024-12-10 12:23:30.993129] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:15:08.887 [2024-12-10 12:23:30.994102] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1
00:15:08.887 [2024-12-10 12:23:30.994111] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:15:08.887 [2024-12-10 12:23:30.994116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms)
00:15:08.887 [2024-12-10 12:23:30.994136] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout)
00:15:08.887 [2024-12-10 12:23:30.994146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms)
00:15:08.887 [2024-12-10 12:23:30.994163] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:15:08.887 [2024-12-10 12:23:30.994168] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:15:08.887 [2024-12-10 12:23:30.994171] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:15:08.887 [2024-12-10 12:23:30.994182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:15:08.887 [2024-12-10 12:23:31.002164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0
00:15:08.887 [2024-12-10 12:23:31.002176] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072
00:15:08.887 [2024-12-10 12:23:31.002181] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072
00:15:08.887 [2024-12-10 12:23:31.002185] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001
00:15:08.888 [2024-12-10 12:23:31.002189] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000
00:15:08.888 [2024-12-10 12:23:31.002193] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1
00:15:08.888 [2024-12-10 12:23:31.002197] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1
00:15:08.888 [2024-12-10 12:23:31.002202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms)
00:15:08.888 [2024-12-10 12:23:31.002209] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms)
00:15:08.888 [2024-12-10 12:23:31.002218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0
00:15:08.888 [2024-12-10 12:23:31.010162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0
00:15:08.888 [2024-12-10 12:23:31.010174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000
00:15:08.888 [2024-12-10 12:23:31.010181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000
00:15:08.888 [2024-12-10 12:23:31.010189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000
00:15:08.888 [2024-12-10 12:23:31.010196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000
00:15:08.888 [2024-12-10 12:23:31.010201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms)
00:15:08.888 [2024-12-10 12:23:31.010209] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:15:08.888 [2024-12-10 12:23:31.010218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0
00:15:08.888 [2024-12-10 12:23:31.018163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0
00:15:08.888 [2024-12-10 12:23:31.018171] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms
00:15:08.888 [2024-12-10 12:23:31.018180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms)
00:15:08.888 [2024-12-10 12:23:31.018189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms)
00:15:08.888 [2024-12-10 12:23:31.018194] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms)
00:15:08.888 [2024-12-10 12:23:31.018202] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:15:08.888 [2024-12-10 12:23:31.026163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0
00:15:08.888 [2024-12-10 12:23:31.026216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms)
00:15:08.888 [2024-12-10 12:23:31.026224] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms)
00:15:08.888 [2024-12-10 12:23:31.026230] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096
00:15:08.888 [2024-12-10 12:23:31.026235] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000
00:15:08.888 [2024-12-10 12:23:31.026238] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:15:08.888 [2024-12-10 12:23:31.026243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0
00:15:08.888 [2024-12-10 12:23:31.034161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0
00:15:08.888 [2024-12-10 12:23:31.034175] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added
00:15:08.888 [2024-12-10 12:23:31.034183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms)
00:15:08.888 [2024-12-10 12:23:31.034190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms)
00:15:08.888 [2024-12-10 12:23:31.034197] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:15:08.888 [2024-12-10 12:23:31.034200] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:15:08.888 [2024-12-10 12:23:31.034203] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:15:08.888 [2024-12-10 12:23:31.034209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:15:08.888 [2024-12-10 12:23:31.042162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0
00:15:08.888 [2024-12-10 12:23:31.042172] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms)
00:15:08.888 [2024-12-10 12:23:31.042179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:15:08.888 [2024-12-10 12:23:31.042186] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:15:08.888 [2024-12-10 12:23:31.042190] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:15:08.888 [2024-12-10 12:23:31.042193] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:15:08.888 [2024-12-10 12:23:31.042199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:15:08.888 [2024-12-10 12:23:31.050163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0
00:15:08.888 [2024-12-10 12:23:31.050174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms)
00:15:08.888 [2024-12-10 12:23:31.050181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms)
00:15:08.888 [2024-12-10 12:23:31.050188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms)
00:15:08.888 [2024-12-10 12:23:31.050193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior
support feature (timeout 30000 ms) 00:15:08.888 [2024-12-10 12:23:31.050198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:08.888 [2024-12-10 12:23:31.050203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:08.888 [2024-12-10 12:23:31.050208] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:08.888 [2024-12-10 12:23:31.050212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:08.888 [2024-12-10 12:23:31.050217] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:08.888 [2024-12-10 12:23:31.050232] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:09.148 [2024-12-10 12:23:31.058161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:09.148 [2024-12-10 12:23:31.058174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:09.148 [2024-12-10 12:23:31.066161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:09.148 [2024-12-10 12:23:31.066173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:09.148 [2024-12-10 12:23:31.074161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:09.148 [2024-12-10 
12:23:31.074173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:09.148 [2024-12-10 12:23:31.082161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:09.148 [2024-12-10 12:23:31.082176] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:09.148 [2024-12-10 12:23:31.082181] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:09.148 [2024-12-10 12:23:31.082184] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:09.148 [2024-12-10 12:23:31.082188] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:09.148 [2024-12-10 12:23:31.082191] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:09.148 [2024-12-10 12:23:31.082197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:09.148 [2024-12-10 12:23:31.082203] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:09.148 [2024-12-10 12:23:31.082207] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:09.148 [2024-12-10 12:23:31.082212] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:09.148 [2024-12-10 12:23:31.082218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:09.148 [2024-12-10 12:23:31.082224] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:09.148 [2024-12-10 12:23:31.082228] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:09.148 [2024-12-10 12:23:31.082231] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:09.148 [2024-12-10 12:23:31.082236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:09.148 [2024-12-10 12:23:31.082243] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:09.148 [2024-12-10 12:23:31.082247] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:09.148 [2024-12-10 12:23:31.082250] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:09.148 [2024-12-10 12:23:31.082255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:09.148 [2024-12-10 12:23:31.090162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:09.148 [2024-12-10 12:23:31.090175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:09.148 [2024-12-10 12:23:31.090185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:09.148 [2024-12-10 12:23:31.090191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:09.148 ===================================================== 00:15:09.148 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:09.148 ===================================================== 00:15:09.148 Controller Capabilities/Features 00:15:09.148 
================================ 00:15:09.148 Vendor ID: 4e58 00:15:09.148 Subsystem Vendor ID: 4e58 00:15:09.148 Serial Number: SPDK2 00:15:09.148 Model Number: SPDK bdev Controller 00:15:09.148 Firmware Version: 25.01 00:15:09.148 Recommended Arb Burst: 6 00:15:09.148 IEEE OUI Identifier: 8d 6b 50 00:15:09.148 Multi-path I/O 00:15:09.148 May have multiple subsystem ports: Yes 00:15:09.148 May have multiple controllers: Yes 00:15:09.148 Associated with SR-IOV VF: No 00:15:09.148 Max Data Transfer Size: 131072 00:15:09.148 Max Number of Namespaces: 32 00:15:09.148 Max Number of I/O Queues: 127 00:15:09.148 NVMe Specification Version (VS): 1.3 00:15:09.148 NVMe Specification Version (Identify): 1.3 00:15:09.148 Maximum Queue Entries: 256 00:15:09.148 Contiguous Queues Required: Yes 00:15:09.148 Arbitration Mechanisms Supported 00:15:09.148 Weighted Round Robin: Not Supported 00:15:09.148 Vendor Specific: Not Supported 00:15:09.148 Reset Timeout: 15000 ms 00:15:09.148 Doorbell Stride: 4 bytes 00:15:09.148 NVM Subsystem Reset: Not Supported 00:15:09.148 Command Sets Supported 00:15:09.148 NVM Command Set: Supported 00:15:09.148 Boot Partition: Not Supported 00:15:09.148 Memory Page Size Minimum: 4096 bytes 00:15:09.148 Memory Page Size Maximum: 4096 bytes 00:15:09.148 Persistent Memory Region: Not Supported 00:15:09.148 Optional Asynchronous Events Supported 00:15:09.148 Namespace Attribute Notices: Supported 00:15:09.148 Firmware Activation Notices: Not Supported 00:15:09.148 ANA Change Notices: Not Supported 00:15:09.148 PLE Aggregate Log Change Notices: Not Supported 00:15:09.148 LBA Status Info Alert Notices: Not Supported 00:15:09.148 EGE Aggregate Log Change Notices: Not Supported 00:15:09.148 Normal NVM Subsystem Shutdown event: Not Supported 00:15:09.148 Zone Descriptor Change Notices: Not Supported 00:15:09.148 Discovery Log Change Notices: Not Supported 00:15:09.148 Controller Attributes 00:15:09.148 128-bit Host Identifier: Supported 00:15:09.148 
Non-Operational Permissive Mode: Not Supported 00:15:09.148 NVM Sets: Not Supported 00:15:09.148 Read Recovery Levels: Not Supported 00:15:09.148 Endurance Groups: Not Supported 00:15:09.149 Predictable Latency Mode: Not Supported 00:15:09.149 Traffic Based Keep ALive: Not Supported 00:15:09.149 Namespace Granularity: Not Supported 00:15:09.149 SQ Associations: Not Supported 00:15:09.149 UUID List: Not Supported 00:15:09.149 Multi-Domain Subsystem: Not Supported 00:15:09.149 Fixed Capacity Management: Not Supported 00:15:09.149 Variable Capacity Management: Not Supported 00:15:09.149 Delete Endurance Group: Not Supported 00:15:09.149 Delete NVM Set: Not Supported 00:15:09.149 Extended LBA Formats Supported: Not Supported 00:15:09.149 Flexible Data Placement Supported: Not Supported 00:15:09.149 00:15:09.149 Controller Memory Buffer Support 00:15:09.149 ================================ 00:15:09.149 Supported: No 00:15:09.149 00:15:09.149 Persistent Memory Region Support 00:15:09.149 ================================ 00:15:09.149 Supported: No 00:15:09.149 00:15:09.149 Admin Command Set Attributes 00:15:09.149 ============================ 00:15:09.149 Security Send/Receive: Not Supported 00:15:09.149 Format NVM: Not Supported 00:15:09.149 Firmware Activate/Download: Not Supported 00:15:09.149 Namespace Management: Not Supported 00:15:09.149 Device Self-Test: Not Supported 00:15:09.149 Directives: Not Supported 00:15:09.149 NVMe-MI: Not Supported 00:15:09.149 Virtualization Management: Not Supported 00:15:09.149 Doorbell Buffer Config: Not Supported 00:15:09.149 Get LBA Status Capability: Not Supported 00:15:09.149 Command & Feature Lockdown Capability: Not Supported 00:15:09.149 Abort Command Limit: 4 00:15:09.149 Async Event Request Limit: 4 00:15:09.149 Number of Firmware Slots: N/A 00:15:09.149 Firmware Slot 1 Read-Only: N/A 00:15:09.149 Firmware Activation Without Reset: N/A 00:15:09.149 Multiple Update Detection Support: N/A 00:15:09.149 Firmware Update 
Granularity: No Information Provided 00:15:09.149 Per-Namespace SMART Log: No 00:15:09.149 Asymmetric Namespace Access Log Page: Not Supported 00:15:09.149 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:09.149 Command Effects Log Page: Supported 00:15:09.149 Get Log Page Extended Data: Supported 00:15:09.149 Telemetry Log Pages: Not Supported 00:15:09.149 Persistent Event Log Pages: Not Supported 00:15:09.149 Supported Log Pages Log Page: May Support 00:15:09.149 Commands Supported & Effects Log Page: Not Supported 00:15:09.149 Feature Identifiers & Effects Log Page:May Support 00:15:09.149 NVMe-MI Commands & Effects Log Page: May Support 00:15:09.149 Data Area 4 for Telemetry Log: Not Supported 00:15:09.149 Error Log Page Entries Supported: 128 00:15:09.149 Keep Alive: Supported 00:15:09.149 Keep Alive Granularity: 10000 ms 00:15:09.149 00:15:09.149 NVM Command Set Attributes 00:15:09.149 ========================== 00:15:09.149 Submission Queue Entry Size 00:15:09.149 Max: 64 00:15:09.149 Min: 64 00:15:09.149 Completion Queue Entry Size 00:15:09.149 Max: 16 00:15:09.149 Min: 16 00:15:09.149 Number of Namespaces: 32 00:15:09.149 Compare Command: Supported 00:15:09.149 Write Uncorrectable Command: Not Supported 00:15:09.149 Dataset Management Command: Supported 00:15:09.149 Write Zeroes Command: Supported 00:15:09.149 Set Features Save Field: Not Supported 00:15:09.149 Reservations: Not Supported 00:15:09.149 Timestamp: Not Supported 00:15:09.149 Copy: Supported 00:15:09.149 Volatile Write Cache: Present 00:15:09.149 Atomic Write Unit (Normal): 1 00:15:09.149 Atomic Write Unit (PFail): 1 00:15:09.149 Atomic Compare & Write Unit: 1 00:15:09.149 Fused Compare & Write: Supported 00:15:09.149 Scatter-Gather List 00:15:09.149 SGL Command Set: Supported (Dword aligned) 00:15:09.149 SGL Keyed: Not Supported 00:15:09.149 SGL Bit Bucket Descriptor: Not Supported 00:15:09.149 SGL Metadata Pointer: Not Supported 00:15:09.149 Oversized SGL: Not Supported 00:15:09.149 SGL 
Metadata Address: Not Supported 00:15:09.149 SGL Offset: Not Supported 00:15:09.149 Transport SGL Data Block: Not Supported 00:15:09.149 Replay Protected Memory Block: Not Supported 00:15:09.149 00:15:09.149 Firmware Slot Information 00:15:09.149 ========================= 00:15:09.149 Active slot: 1 00:15:09.149 Slot 1 Firmware Revision: 25.01 00:15:09.149 00:15:09.149 00:15:09.149 Commands Supported and Effects 00:15:09.149 ============================== 00:15:09.149 Admin Commands 00:15:09.149 -------------- 00:15:09.149 Get Log Page (02h): Supported 00:15:09.149 Identify (06h): Supported 00:15:09.149 Abort (08h): Supported 00:15:09.149 Set Features (09h): Supported 00:15:09.149 Get Features (0Ah): Supported 00:15:09.149 Asynchronous Event Request (0Ch): Supported 00:15:09.149 Keep Alive (18h): Supported 00:15:09.149 I/O Commands 00:15:09.149 ------------ 00:15:09.149 Flush (00h): Supported LBA-Change 00:15:09.149 Write (01h): Supported LBA-Change 00:15:09.149 Read (02h): Supported 00:15:09.149 Compare (05h): Supported 00:15:09.149 Write Zeroes (08h): Supported LBA-Change 00:15:09.149 Dataset Management (09h): Supported LBA-Change 00:15:09.149 Copy (19h): Supported LBA-Change 00:15:09.149 00:15:09.149 Error Log 00:15:09.149 ========= 00:15:09.149 00:15:09.149 Arbitration 00:15:09.149 =========== 00:15:09.149 Arbitration Burst: 1 00:15:09.149 00:15:09.149 Power Management 00:15:09.149 ================ 00:15:09.149 Number of Power States: 1 00:15:09.149 Current Power State: Power State #0 00:15:09.149 Power State #0: 00:15:09.149 Max Power: 0.00 W 00:15:09.149 Non-Operational State: Operational 00:15:09.149 Entry Latency: Not Reported 00:15:09.149 Exit Latency: Not Reported 00:15:09.149 Relative Read Throughput: 0 00:15:09.149 Relative Read Latency: 0 00:15:09.149 Relative Write Throughput: 0 00:15:09.149 Relative Write Latency: 0 00:15:09.149 Idle Power: Not Reported 00:15:09.149 Active Power: Not Reported 00:15:09.149 Non-Operational Permissive Mode: Not 
Supported 00:15:09.149 00:15:09.149 Health Information 00:15:09.149 ================== 00:15:09.149 Critical Warnings: 00:15:09.149 Available Spare Space: OK 00:15:09.149 Temperature: OK 00:15:09.149 Device Reliability: OK 00:15:09.149 Read Only: No 00:15:09.149 Volatile Memory Backup: OK 00:15:09.149 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:09.149 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:09.149 Available Spare: 0% 00:15:09.149 Available Sp[2024-12-10 12:23:31.090282] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:09.149 [2024-12-10 12:23:31.098163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:09.149 [2024-12-10 12:23:31.098198] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:09.149 [2024-12-10 12:23:31.098207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.149 [2024-12-10 12:23:31.098213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.149 [2024-12-10 12:23:31.098218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.149 [2024-12-10 12:23:31.098224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:09.149 [2024-12-10 12:23:31.098281] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:09.149 [2024-12-10 12:23:31.098291] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:09.149 
[2024-12-10 12:23:31.099288] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:09.149 [2024-12-10 12:23:31.099333] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:09.149 [2024-12-10 12:23:31.099339] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:09.149 [2024-12-10 12:23:31.100297] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:09.149 [2024-12-10 12:23:31.100311] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:09.149 [2024-12-10 12:23:31.100359] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:09.149 [2024-12-10 12:23:31.101338] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:09.149 are Threshold: 0% 00:15:09.149 Life Percentage Used: 0% 00:15:09.149 Data Units Read: 0 00:15:09.149 Data Units Written: 0 00:15:09.149 Host Read Commands: 0 00:15:09.149 Host Write Commands: 0 00:15:09.149 Controller Busy Time: 0 minutes 00:15:09.149 Power Cycles: 0 00:15:09.149 Power On Hours: 0 hours 00:15:09.149 Unsafe Shutdowns: 0 00:15:09.149 Unrecoverable Media Errors: 0 00:15:09.149 Lifetime Error Log Entries: 0 00:15:09.149 Warning Temperature Time: 0 minutes 00:15:09.149 Critical Temperature Time: 0 minutes 00:15:09.149 00:15:09.149 Number of Queues 00:15:09.149 ================ 00:15:09.149 Number of I/O Submission Queues: 127 00:15:09.149 Number of I/O Completion Queues: 127 00:15:09.149 00:15:09.149 Active Namespaces 00:15:09.149 ================= 00:15:09.149 Namespace ID:1 00:15:09.149 Error Recovery Timeout: Unlimited 
00:15:09.149 Command Set Identifier: NVM (00h) 00:15:09.149 Deallocate: Supported 00:15:09.149 Deallocated/Unwritten Error: Not Supported 00:15:09.150 Deallocated Read Value: Unknown 00:15:09.150 Deallocate in Write Zeroes: Not Supported 00:15:09.150 Deallocated Guard Field: 0xFFFF 00:15:09.150 Flush: Supported 00:15:09.150 Reservation: Supported 00:15:09.150 Namespace Sharing Capabilities: Multiple Controllers 00:15:09.150 Size (in LBAs): 131072 (0GiB) 00:15:09.150 Capacity (in LBAs): 131072 (0GiB) 00:15:09.150 Utilization (in LBAs): 131072 (0GiB) 00:15:09.150 NGUID: B6CFE917F328488FBA90F8528FE367C9 00:15:09.150 UUID: b6cfe917-f328-488f-ba90-f8528fe367c9 00:15:09.150 Thin Provisioning: Not Supported 00:15:09.150 Per-NS Atomic Units: Yes 00:15:09.150 Atomic Boundary Size (Normal): 0 00:15:09.150 Atomic Boundary Size (PFail): 0 00:15:09.150 Atomic Boundary Offset: 0 00:15:09.150 Maximum Single Source Range Length: 65535 00:15:09.150 Maximum Copy Length: 65535 00:15:09.150 Maximum Source Range Count: 1 00:15:09.150 NGUID/EUI64 Never Reused: No 00:15:09.150 Namespace Write Protected: No 00:15:09.150 Number of LBA Formats: 1 00:15:09.150 Current LBA Format: LBA Format #00 00:15:09.150 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:09.150 00:15:09.150 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:09.408 [2024-12-10 12:23:31.320554] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:14.677 Initializing NVMe Controllers 00:15:14.677 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:14.677 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:15:14.677 Initialization complete. Launching workers. 00:15:14.677 ======================================================== 00:15:14.677 Latency(us) 00:15:14.677 Device Information : IOPS MiB/s Average min max 00:15:14.677 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39939.28 156.01 3204.71 1014.96 10162.08 00:15:14.677 ======================================================== 00:15:14.677 Total : 39939.28 156.01 3204.71 1014.96 10162.08 00:15:14.677 00:15:14.677 [2024-12-10 12:23:36.426415] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:14.678 12:23:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:14.678 [2024-12-10 12:23:36.665140] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:19.948 Initializing NVMe Controllers 00:15:19.948 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:19.948 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:19.948 Initialization complete. Launching workers. 
00:15:19.948 ======================================================== 00:15:19.948 Latency(us) 00:15:19.948 Device Information : IOPS MiB/s Average min max 00:15:19.948 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39953.52 156.07 3203.55 1005.88 6589.13 00:15:19.948 ======================================================== 00:15:19.948 Total : 39953.52 156.07 3203.55 1005.88 6589.13 00:15:19.948 00:15:19.948 [2024-12-10 12:23:41.686252] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:19.948 12:23:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:19.948 [2024-12-10 12:23:41.893730] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:25.219 [2024-12-10 12:23:47.033261] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:25.219 Initializing NVMe Controllers 00:15:25.219 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:25.219 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:25.219 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:25.219 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:25.219 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:25.219 Initialization complete. Launching workers. 
00:15:25.219 Starting thread on core 2 00:15:25.219 Starting thread on core 3 00:15:25.219 Starting thread on core 1 00:15:25.219 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:25.219 [2024-12-10 12:23:47.337608] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:28.508 [2024-12-10 12:23:50.390028] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:28.508 Initializing NVMe Controllers 00:15:28.508 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:28.508 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:28.508 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:28.508 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:28.508 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:28.508 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:28.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration run with configuration: 00:15:28.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:28.508 Initialization complete. Launching workers. 
00:15:28.508 Starting thread on core 1 with urgent priority queue 00:15:28.508 Starting thread on core 2 with urgent priority queue 00:15:28.508 Starting thread on core 3 with urgent priority queue 00:15:28.508 Starting thread on core 0 with urgent priority queue 00:15:28.508 SPDK bdev Controller (SPDK2 ) core 0: 9195.67 IO/s 10.87 secs/100000 ios 00:15:28.508 SPDK bdev Controller (SPDK2 ) core 1: 7532.00 IO/s 13.28 secs/100000 ios 00:15:28.508 SPDK bdev Controller (SPDK2 ) core 2: 7644.67 IO/s 13.08 secs/100000 ios 00:15:28.508 SPDK bdev Controller (SPDK2 ) core 3: 8292.33 IO/s 12.06 secs/100000 ios 00:15:28.508 ======================================================== 00:15:28.508 00:15:28.508 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:28.508 [2024-12-10 12:23:50.672179] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:28.813 Initializing NVMe Controllers 00:15:28.813 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:28.813 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:28.813 Namespace ID: 1 size: 0GB 00:15:28.813 Initialization complete. 00:15:28.813 INFO: using host memory buffer for IO 00:15:28.813 Hello world! 
00:15:28.813 [2024-12-10 12:23:50.684279] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:28.813 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:28.813 [2024-12-10 12:23:50.966096] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:30.228 Initializing NVMe Controllers 00:15:30.228 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:30.228 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:30.228 Initialization complete. Launching workers. 00:15:30.228 submit (in ns) avg, min, max = 7274.8, 3213.0, 3999293.0 00:15:30.228 complete (in ns) avg, min, max = 22800.2, 1768.7, 4994860.0 00:15:30.228 00:15:30.228 Submit histogram 00:15:30.228 ================ 00:15:30.228 Range in us Cumulative Count 00:15:30.228 3.200 - 3.214: 0.0064% ( 1) 00:15:30.228 3.214 - 3.228: 0.0255% ( 3) 00:15:30.228 3.228 - 3.242: 0.0319% ( 1) 00:15:30.229 3.242 - 3.256: 0.0573% ( 4) 00:15:30.229 3.256 - 3.270: 0.0956% ( 6) 00:15:30.229 3.270 - 3.283: 0.2102% ( 18) 00:15:30.229 3.283 - 3.297: 1.1276% ( 144) 00:15:30.229 3.297 - 3.311: 4.3129% ( 500) 00:15:30.229 3.311 - 3.325: 9.0845% ( 749) 00:15:30.229 3.325 - 3.339: 14.5378% ( 856) 00:15:30.229 3.339 - 3.353: 21.0231% ( 1018) 00:15:30.229 3.353 - 3.367: 27.1071% ( 955) 00:15:30.229 3.367 - 3.381: 32.0826% ( 781) 00:15:30.229 3.381 - 3.395: 37.5549% ( 859) 00:15:30.229 3.395 - 3.409: 42.8617% ( 833) 00:15:30.229 3.409 - 3.423: 47.3211% ( 700) 00:15:30.229 3.423 - 3.437: 51.3028% ( 625) 00:15:30.229 3.437 - 3.450: 56.1317% ( 758) 00:15:30.229 3.450 - 3.464: 62.1902% ( 951) 00:15:30.229 3.464 - 3.478: 67.1530% ( 779) 00:15:30.229 3.478 - 3.492: 71.6697% ( 709) 00:15:30.229 
3.492 - 3.506: 76.7344% ( 795) 00:15:30.229 3.506 - 3.520: 81.0728% ( 681) 00:15:30.229 3.520 - 3.534: 84.0479% ( 467) 00:15:30.229 3.534 - 3.548: 85.8699% ( 286) 00:15:30.229 3.548 - 3.562: 86.7236% ( 134) 00:15:30.229 3.562 - 3.590: 87.6664% ( 148) 00:15:30.229 3.590 - 3.617: 89.0489% ( 217) 00:15:30.229 3.617 - 3.645: 90.9282% ( 295) 00:15:30.229 3.645 - 3.673: 92.5336% ( 252) 00:15:30.229 3.673 - 3.701: 94.0626% ( 240) 00:15:30.229 3.701 - 3.729: 95.6998% ( 257) 00:15:30.229 3.729 - 3.757: 97.2033% ( 236) 00:15:30.229 3.757 - 3.784: 98.1653% ( 151) 00:15:30.229 3.784 - 3.812: 98.8851% ( 113) 00:15:30.229 3.812 - 3.840: 99.2929% ( 64) 00:15:30.229 3.840 - 3.868: 99.4712% ( 28) 00:15:30.229 3.868 - 3.896: 99.5604% ( 14) 00:15:30.229 3.896 - 3.923: 99.5859% ( 4) 00:15:30.229 3.923 - 3.951: 99.5986% ( 2) 00:15:30.229 3.979 - 4.007: 99.6178% ( 3) 00:15:30.229 4.063 - 4.090: 99.6241% ( 1) 00:15:30.229 4.341 - 4.369: 99.6305% ( 1) 00:15:30.229 5.037 - 5.064: 99.6369% ( 1) 00:15:30.229 5.176 - 5.203: 99.6432% ( 1) 00:15:30.229 5.259 - 5.287: 99.6687% ( 4) 00:15:30.229 5.315 - 5.343: 99.6751% ( 1) 00:15:30.229 5.454 - 5.482: 99.6878% ( 2) 00:15:30.229 5.537 - 5.565: 99.6942% ( 1) 00:15:30.229 5.565 - 5.593: 99.7006% ( 1) 00:15:30.229 5.677 - 5.704: 99.7070% ( 1) 00:15:30.229 5.760 - 5.788: 99.7197% ( 2) 00:15:30.229 5.871 - 5.899: 99.7261% ( 1) 00:15:30.229 5.983 - 6.010: 99.7388% ( 2) 00:15:30.229 6.066 - 6.094: 99.7452% ( 1) 00:15:30.229 6.094 - 6.122: 99.7515% ( 1) 00:15:30.229 6.122 - 6.150: 99.7579% ( 1) 00:15:30.229 6.205 - 6.233: 99.7643% ( 1) 00:15:30.229 6.289 - 6.317: 99.7770% ( 2) 00:15:30.229 6.317 - 6.344: 99.7834% ( 1) 00:15:30.229 6.344 - 6.372: 99.7898% ( 1) 00:15:30.229 6.372 - 6.400: 99.7961% ( 1) 00:15:30.229 6.400 - 6.428: 99.8025% ( 1) 00:15:30.229 6.456 - 6.483: 99.8089% ( 1) 00:15:30.229 6.483 - 6.511: 99.8153% ( 1) 00:15:30.229 6.567 - 6.595: 99.8280% ( 2) 00:15:30.229 6.595 - 6.623: 99.8344% ( 1) 00:15:30.229 6.706 - 6.734: 99.8407% ( 1) 
00:15:30.229 6.734 - 6.762: 99.8471% ( 1) 00:15:30.229 6.790 - 6.817: 99.8535% ( 1) 00:15:30.229 6.901 - 6.929: 99.8598% ( 1) 00:15:30.229 6.984 - 7.012: 99.8662% ( 1) 00:15:30.229 7.123 - 7.179: 99.8726% ( 1) 00:15:30.229 7.179 - 7.235: 99.8790% ( 1) 00:15:30.229 7.346 - 7.402: 99.8853% ( 1) 00:15:30.229 7.513 - 7.569: 99.8917% ( 1) 00:15:30.229 7.736 - 7.791: 99.8981% ( 1) 00:15:30.229 8.070 - 8.125: 99.9044% ( 1) 00:15:30.229 3989.148 - 4017.642: 100.0000% ( 15) 00:15:30.229 00:15:30.229 [2024-12-10 12:23:52.061246] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:30.229 Complete histogram 00:15:30.229 ================== 00:15:30.229 Range in us Cumulative Count 00:15:30.229 1.767 - 1.774: 0.0446% ( 7) 00:15:30.229 1.774 - 1.781: 0.1019% ( 9) 00:15:30.229 1.781 - 1.795: 0.1274% ( 4) 00:15:30.229 1.795 - 1.809: 0.1847% ( 9) 00:15:30.229 1.809 - 1.823: 5.5233% ( 838) 00:15:30.229 1.823 - 1.837: 29.3559% ( 3741) 00:15:30.229 1.837 - 1.850: 36.2553% ( 1083) 00:15:30.229 1.850 - 1.864: 43.9128% ( 1202) 00:15:30.229 1.864 - 1.878: 77.3778% ( 5253) 00:15:30.229 1.878 - 1.892: 91.8137% ( 2266) 00:15:30.229 1.892 - 1.906: 95.3749% ( 559) 00:15:30.229 1.906 - 1.920: 96.8848% ( 237) 00:15:30.229 1.920 - 1.934: 97.4581% ( 90) 00:15:30.229 1.934 - 1.948: 98.3627% ( 142) 00:15:30.229 1.948 - 1.962: 98.9743% ( 96) 00:15:30.229 1.962 - 1.976: 99.1654% ( 30) 00:15:30.229 1.976 - 1.990: 99.2037% ( 6) 00:15:30.229 1.990 - 2.003: 99.2100% ( 1) 00:15:30.229 2.003 - 2.017: 99.2419% ( 5) 00:15:30.229 2.017 - 2.031: 99.2610% ( 3) 00:15:30.229 2.031 - 2.045: 99.2737% ( 2) 00:15:30.229 2.045 - 2.059: 99.2801% ( 1) 00:15:30.229 2.073 - 2.087: 99.2865% ( 1) 00:15:30.229 2.087 - 2.101: 99.2929% ( 1) 00:15:30.229 2.129 - 2.143: 99.2992% ( 1) 00:15:30.229 2.157 - 2.170: 99.3056% ( 1) 00:15:30.229 2.268 - 2.282: 99.3120% ( 1) 00:15:30.229 2.310 - 2.323: 99.3247% ( 2) 00:15:30.229 2.435 - 2.449: 99.3311% ( 1) 00:15:30.229 3.478 - 
3.492: 99.3375% ( 1) 00:15:30.229 3.812 - 3.840: 99.3438% ( 1) 00:15:30.229 3.840 - 3.868: 99.3502% ( 1) 00:15:30.229 3.979 - 4.007: 99.3566% ( 1) 00:15:30.229 4.035 - 4.063: 99.3629% ( 1) 00:15:30.229 4.230 - 4.257: 99.3693% ( 1) 00:15:30.229 4.257 - 4.285: 99.3757% ( 1) 00:15:30.229 4.397 - 4.424: 99.3820% ( 1) 00:15:30.229 4.480 - 4.508: 99.3884% ( 1) 00:15:30.229 4.647 - 4.675: 99.3948% ( 1) 00:15:30.229 4.870 - 4.897: 99.4012% ( 1) 00:15:30.229 4.953 - 4.981: 99.4075% ( 1) 00:15:30.229 5.009 - 5.037: 99.4139% ( 1) 00:15:30.229 5.176 - 5.203: 99.4203% ( 1) 00:15:30.229 5.315 - 5.343: 99.4266% ( 1) 00:15:30.229 5.343 - 5.370: 99.4330% ( 1) 00:15:30.229 5.482 - 5.510: 99.4394% ( 1) 00:15:30.229 5.983 - 6.010: 99.4458% ( 1) 00:15:30.229 6.038 - 6.066: 99.4521% ( 1) 00:15:30.229 6.205 - 6.233: 99.4585% ( 1) 00:15:30.229 6.400 - 6.428: 99.4649% ( 1) 00:15:30.229 11.910 - 11.965: 99.4712% ( 1) 00:15:30.229 39.624 - 39.847: 99.4776% ( 1) 00:15:30.229 3989.148 - 4017.642: 99.9936% ( 81) 00:15:30.229 4986.435 - 5014.929: 100.0000% ( 1) 00:15:30.229 00:15:30.229 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:30.229 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:30.229 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:30.229 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:30.229 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:30.229 [ 00:15:30.229 { 00:15:30.229 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:30.229 "subtype": "Discovery", 00:15:30.229 
"listen_addresses": [], 00:15:30.229 "allow_any_host": true, 00:15:30.229 "hosts": [] 00:15:30.229 }, 00:15:30.229 { 00:15:30.229 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:30.229 "subtype": "NVMe", 00:15:30.229 "listen_addresses": [ 00:15:30.229 { 00:15:30.229 "trtype": "VFIOUSER", 00:15:30.229 "adrfam": "IPv4", 00:15:30.229 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:30.229 "trsvcid": "0" 00:15:30.229 } 00:15:30.229 ], 00:15:30.229 "allow_any_host": true, 00:15:30.229 "hosts": [], 00:15:30.229 "serial_number": "SPDK1", 00:15:30.229 "model_number": "SPDK bdev Controller", 00:15:30.229 "max_namespaces": 32, 00:15:30.229 "min_cntlid": 1, 00:15:30.229 "max_cntlid": 65519, 00:15:30.229 "namespaces": [ 00:15:30.229 { 00:15:30.229 "nsid": 1, 00:15:30.229 "bdev_name": "Malloc1", 00:15:30.229 "name": "Malloc1", 00:15:30.229 "nguid": "2B1586F2780F4447AB94E23DFC9A3D09", 00:15:30.229 "uuid": "2b1586f2-780f-4447-ab94-e23dfc9a3d09" 00:15:30.229 }, 00:15:30.229 { 00:15:30.229 "nsid": 2, 00:15:30.229 "bdev_name": "Malloc3", 00:15:30.229 "name": "Malloc3", 00:15:30.229 "nguid": "65AB430561224B3D8B3896C9C7CC159E", 00:15:30.229 "uuid": "65ab4305-6122-4b3d-8b38-96c9c7cc159e" 00:15:30.229 } 00:15:30.229 ] 00:15:30.229 }, 00:15:30.229 { 00:15:30.229 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:30.229 "subtype": "NVMe", 00:15:30.229 "listen_addresses": [ 00:15:30.229 { 00:15:30.229 "trtype": "VFIOUSER", 00:15:30.229 "adrfam": "IPv4", 00:15:30.229 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:30.229 "trsvcid": "0" 00:15:30.229 } 00:15:30.229 ], 00:15:30.229 "allow_any_host": true, 00:15:30.229 "hosts": [], 00:15:30.229 "serial_number": "SPDK2", 00:15:30.229 "model_number": "SPDK bdev Controller", 00:15:30.229 "max_namespaces": 32, 00:15:30.229 "min_cntlid": 1, 00:15:30.229 "max_cntlid": 65519, 00:15:30.229 "namespaces": [ 00:15:30.229 { 00:15:30.230 "nsid": 1, 00:15:30.230 "bdev_name": "Malloc2", 00:15:30.230 "name": "Malloc2", 00:15:30.230 "nguid": 
"B6CFE917F328488FBA90F8528FE367C9", 00:15:30.230 "uuid": "b6cfe917-f328-488f-ba90-f8528fe367c9" 00:15:30.230 } 00:15:30.230 ] 00:15:30.230 } 00:15:30.230 ] 00:15:30.230 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:30.230 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:30.230 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1599446 00:15:30.230 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:30.230 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:30.230 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:30.230 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:30.230 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:30.230 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:30.230 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:30.488 [2024-12-10 12:23:52.455601] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:30.488 Malloc4 00:15:30.488 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:30.746 [2024-12-10 12:23:52.707693] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:30.746 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:30.746 Asynchronous Event Request test 00:15:30.746 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:30.746 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:30.746 Registering asynchronous event callbacks... 00:15:30.746 Starting namespace attribute notice tests for all controllers... 00:15:30.746 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:30.746 aer_cb - Changed Namespace 00:15:30.746 Cleaning up... 
00:15:31.005 [ 00:15:31.005 { 00:15:31.005 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:31.005 "subtype": "Discovery", 00:15:31.005 "listen_addresses": [], 00:15:31.005 "allow_any_host": true, 00:15:31.005 "hosts": [] 00:15:31.005 }, 00:15:31.005 { 00:15:31.005 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:31.005 "subtype": "NVMe", 00:15:31.005 "listen_addresses": [ 00:15:31.005 { 00:15:31.005 "trtype": "VFIOUSER", 00:15:31.005 "adrfam": "IPv4", 00:15:31.005 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:31.005 "trsvcid": "0" 00:15:31.005 } 00:15:31.005 ], 00:15:31.005 "allow_any_host": true, 00:15:31.005 "hosts": [], 00:15:31.005 "serial_number": "SPDK1", 00:15:31.005 "model_number": "SPDK bdev Controller", 00:15:31.005 "max_namespaces": 32, 00:15:31.005 "min_cntlid": 1, 00:15:31.005 "max_cntlid": 65519, 00:15:31.005 "namespaces": [ 00:15:31.005 { 00:15:31.005 "nsid": 1, 00:15:31.005 "bdev_name": "Malloc1", 00:15:31.005 "name": "Malloc1", 00:15:31.005 "nguid": "2B1586F2780F4447AB94E23DFC9A3D09", 00:15:31.005 "uuid": "2b1586f2-780f-4447-ab94-e23dfc9a3d09" 00:15:31.005 }, 00:15:31.005 { 00:15:31.005 "nsid": 2, 00:15:31.005 "bdev_name": "Malloc3", 00:15:31.005 "name": "Malloc3", 00:15:31.005 "nguid": "65AB430561224B3D8B3896C9C7CC159E", 00:15:31.005 "uuid": "65ab4305-6122-4b3d-8b38-96c9c7cc159e" 00:15:31.005 } 00:15:31.005 ] 00:15:31.005 }, 00:15:31.005 { 00:15:31.005 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:31.005 "subtype": "NVMe", 00:15:31.005 "listen_addresses": [ 00:15:31.005 { 00:15:31.005 "trtype": "VFIOUSER", 00:15:31.005 "adrfam": "IPv4", 00:15:31.005 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:31.005 "trsvcid": "0" 00:15:31.005 } 00:15:31.005 ], 00:15:31.005 "allow_any_host": true, 00:15:31.005 "hosts": [], 00:15:31.005 "serial_number": "SPDK2", 00:15:31.005 "model_number": "SPDK bdev Controller", 00:15:31.005 "max_namespaces": 32, 00:15:31.005 "min_cntlid": 1, 00:15:31.005 "max_cntlid": 65519, 00:15:31.005 "namespaces": [ 
00:15:31.005 { 00:15:31.005 "nsid": 1, 00:15:31.005 "bdev_name": "Malloc2", 00:15:31.005 "name": "Malloc2", 00:15:31.005 "nguid": "B6CFE917F328488FBA90F8528FE367C9", 00:15:31.005 "uuid": "b6cfe917-f328-488f-ba90-f8528fe367c9" 00:15:31.005 }, 00:15:31.005 { 00:15:31.005 "nsid": 2, 00:15:31.005 "bdev_name": "Malloc4", 00:15:31.005 "name": "Malloc4", 00:15:31.005 "nguid": "C3246F10BFF1425B8B99CBA384172AF9", 00:15:31.005 "uuid": "c3246f10-bff1-425b-8b99-cba384172af9" 00:15:31.005 } 00:15:31.005 ] 00:15:31.005 } 00:15:31.005 ] 00:15:31.005 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1599446 00:15:31.005 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:31.006 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1591736 00:15:31.006 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1591736 ']' 00:15:31.006 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1591736 00:15:31.006 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:31.006 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.006 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1591736 00:15:31.006 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:31.006 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:31.006 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1591736' 00:15:31.006 killing process with pid 1591736 00:15:31.006 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 1591736 00:15:31.006 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1591736 00:15:31.265 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:31.265 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:31.265 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:31.265 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:31.265 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:31.265 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1599680 00:15:31.265 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1599680' 00:15:31.265 Process pid: 1599680 00:15:31.265 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:31.265 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:31.265 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1599680 00:15:31.265 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1599680 ']' 00:15:31.265 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.265 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.265 
12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.265 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.265 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:31.265 [2024-12-10 12:23:53.303088] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:31.265 [2024-12-10 12:23:53.303927] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:15:31.265 [2024-12-10 12:23:53.303965] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.265 [2024-12-10 12:23:53.362651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:31.265 [2024-12-10 12:23:53.404986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.265 [2024-12-10 12:23:53.405022] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.265 [2024-12-10 12:23:53.405029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:31.265 [2024-12-10 12:23:53.405035] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:31.265 [2024-12-10 12:23:53.405040] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:31.265 [2024-12-10 12:23:53.410177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.265 [2024-12-10 12:23:53.410212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:31.265 [2024-12-10 12:23:53.410321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.265 [2024-12-10 12:23:53.410321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:31.525 [2024-12-10 12:23:53.479862] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:31.525 [2024-12-10 12:23:53.480126] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:31.525 [2024-12-10 12:23:53.481017] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:31.525 [2024-12-10 12:23:53.481277] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:31.525 [2024-12-10 12:23:53.481313] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:15:31.525 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:31.525 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:31.525 12:23:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:32.461 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:32.720 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:32.720 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:32.720 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:32.720 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:32.720 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:32.979 Malloc1 00:15:32.979 12:23:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:33.237 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:33.237 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a 
/var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:33.495 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:33.495 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:33.495 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:33.753 Malloc2 00:15:33.753 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:34.012 12:23:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:34.271 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:34.271 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:34.271 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1599680 00:15:34.271 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1599680 ']' 00:15:34.271 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1599680 00:15:34.271 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:34.271 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:15:34.271 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1599680 00:15:34.271 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:34.271 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:34.271 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1599680' 00:15:34.271 killing process with pid 1599680 00:15:34.271 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1599680 00:15:34.271 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1599680 00:15:34.530 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:34.530 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:34.530 00:15:34.530 real 0m51.513s 00:15:34.530 user 3m19.367s 00:15:34.530 sys 0m3.259s 00:15:34.530 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.530 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:34.530 ************************************ 00:15:34.530 END TEST nvmf_vfio_user 00:15:34.530 ************************************ 00:15:34.530 12:23:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:34.530 12:23:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:34.530 12:23:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.530 12:23:56 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:34.790 ************************************ 00:15:34.790 START TEST nvmf_vfio_user_nvme_compliance 00:15:34.790 ************************************ 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:34.790 * Looking for test storage... 00:15:34.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/compliance 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read 
-ra ver2 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 
00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:34.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.790 --rc genhtml_branch_coverage=1 00:15:34.790 --rc genhtml_function_coverage=1 00:15:34.790 --rc genhtml_legend=1 00:15:34.790 --rc geninfo_all_blocks=1 00:15:34.790 --rc geninfo_unexecuted_blocks=1 00:15:34.790 00:15:34.790 ' 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:34.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.790 --rc genhtml_branch_coverage=1 00:15:34.790 --rc genhtml_function_coverage=1 00:15:34.790 --rc genhtml_legend=1 00:15:34.790 --rc geninfo_all_blocks=1 00:15:34.790 --rc geninfo_unexecuted_blocks=1 00:15:34.790 00:15:34.790 ' 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:34.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.790 --rc genhtml_branch_coverage=1 00:15:34.790 --rc genhtml_function_coverage=1 
00:15:34.790 --rc genhtml_legend=1 00:15:34.790 --rc geninfo_all_blocks=1 00:15:34.790 --rc geninfo_unexecuted_blocks=1 00:15:34.790 00:15:34.790 ' 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:34.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.790 --rc genhtml_branch_coverage=1 00:15:34.790 --rc genhtml_function_coverage=1 00:15:34.790 --rc genhtml_legend=1 00:15:34.790 --rc geninfo_all_blocks=1 00:15:34.790 --rc geninfo_unexecuted_blocks=1 00:15:34.790 00:15:34.790 ' 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.790 12:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:34.790 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:34.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:34.791 12:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1600325 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1600325' 00:15:34.791 Process pid: 1600325 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1600325 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1600325 ']' 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.791 12:23:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:34.791 [2024-12-10 12:23:56.954272] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:15:34.791 [2024-12-10 12:23:56.954322] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.050 [2024-12-10 12:23:57.032241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:35.050 [2024-12-10 12:23:57.074825] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.050 [2024-12-10 12:23:57.074862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.050 [2024-12-10 12:23:57.074869] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.050 [2024-12-10 12:23:57.074877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.050 [2024-12-10 12:23:57.074883] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:35.050 [2024-12-10 12:23:57.076291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.050 [2024-12-10 12:23:57.076323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.050 [2024-12-10 12:23:57.076324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.050 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.050 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:35.050 12:23:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:36.427 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:36.427 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:36.427 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:36.427 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.427 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:36.427 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.427 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:36.427 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:36.427 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.427 12:23:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:36.427 malloc0 00:15:36.427 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.427 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:36.427 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.427 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:36.427 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.427 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:36.427 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.427 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:36.427 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.427 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:36.427 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.427 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:36.427 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:36.427 12:23:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:36.427 00:15:36.427 00:15:36.427 CUnit - A unit testing framework for C - Version 2.1-3 00:15:36.427 http://cunit.sourceforge.net/ 00:15:36.427 00:15:36.427 00:15:36.427 Suite: nvme_compliance 00:15:36.427 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-10 12:23:58.422620] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.427 [2024-12-10 12:23:58.423979] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:36.427 [2024-12-10 12:23:58.423996] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:36.427 [2024-12-10 12:23:58.424003] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:36.427 [2024-12-10 12:23:58.425642] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.427 passed 00:15:36.427 Test: admin_identify_ctrlr_verify_fused ...[2024-12-10 12:23:58.505209] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.427 [2024-12-10 12:23:58.508225] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.427 passed 00:15:36.427 Test: admin_identify_ns ...[2024-12-10 12:23:58.585201] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.686 [2024-12-10 12:23:58.646171] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:36.686 [2024-12-10 12:23:58.654170] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:36.686 [2024-12-10 12:23:58.675264] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:15:36.686 passed 00:15:36.686 Test: admin_get_features_mandatory_features ...[2024-12-10 12:23:58.753124] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.686 [2024-12-10 12:23:58.759165] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.686 passed 00:15:36.686 Test: admin_get_features_optional_features ...[2024-12-10 12:23:58.834676] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.686 [2024-12-10 12:23:58.837694] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.944 passed 00:15:36.944 Test: admin_set_features_number_of_queues ...[2024-12-10 12:23:58.915631] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.944 [2024-12-10 12:23:59.021247] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:36.944 passed 00:15:36.944 Test: admin_get_log_page_mandatory_logs ...[2024-12-10 12:23:59.097324] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:36.944 [2024-12-10 12:23:59.100343] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.202 passed 00:15:37.202 Test: admin_get_log_page_with_lpo ...[2024-12-10 12:23:59.179610] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.203 [2024-12-10 12:23:59.248167] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:37.203 [2024-12-10 12:23:59.261219] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.203 passed 00:15:37.203 Test: fabric_property_get ...[2024-12-10 12:23:59.338275] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.203 [2024-12-10 12:23:59.339518] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:37.203 [2024-12-10 12:23:59.341297] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.203 passed 00:15:37.461 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-10 12:23:59.418800] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.461 [2024-12-10 12:23:59.420027] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:37.461 [2024-12-10 12:23:59.421821] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.461 passed 00:15:37.461 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-10 12:23:59.498624] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.461 [2024-12-10 12:23:59.586173] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:37.461 [2024-12-10 12:23:59.602171] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:37.461 [2024-12-10 12:23:59.607271] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.719 passed 00:15:37.719 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-10 12:23:59.681442] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.720 [2024-12-10 12:23:59.682676] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:37.720 [2024-12-10 12:23:59.684467] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.720 passed 00:15:37.720 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-10 12:23:59.763466] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.720 [2024-12-10 12:23:59.840169] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:37.720 [2024-12-10 
12:23:59.864177] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:37.720 [2024-12-10 12:23:59.869272] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.978 passed 00:15:37.978 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-10 12:23:59.945434] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.978 [2024-12-10 12:23:59.946684] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:37.978 [2024-12-10 12:23:59.946707] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:37.978 [2024-12-10 12:23:59.948461] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:37.978 passed 00:15:37.978 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-10 12:24:00.026753] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:37.978 [2024-12-10 12:24:00.116166] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:37.978 [2024-12-10 12:24:00.124167] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:37.978 [2024-12-10 12:24:00.130392] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:37.978 [2024-12-10 12:24:00.139180] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:38.237 [2024-12-10 12:24:00.168274] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:38.237 passed 00:15:38.237 Test: admin_create_io_sq_verify_pc ...[2024-12-10 12:24:00.245586] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:38.237 [2024-12-10 12:24:00.264176] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:38.237 [2024-12-10 12:24:00.281788] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:38.237 passed 00:15:38.237 Test: admin_create_io_qp_max_qps ...[2024-12-10 12:24:00.358353] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:39.611 [2024-12-10 12:24:01.446087] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:39.869 [2024-12-10 12:24:01.818262] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:39.869 passed 00:15:39.869 Test: admin_create_io_sq_shared_cq ...[2024-12-10 12:24:01.895522] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:39.869 [2024-12-10 12:24:02.031170] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:40.127 [2024-12-10 12:24:02.068220] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.127 passed 00:15:40.127 00:15:40.127 Run Summary: Type Total Ran Passed Failed Inactive 00:15:40.127 suites 1 1 n/a 0 0 00:15:40.127 tests 18 18 18 0 0 00:15:40.127 asserts 360 360 360 0 n/a 00:15:40.127 00:15:40.127 Elapsed time = 1.496 seconds 00:15:40.128 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1600325 00:15:40.128 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1600325 ']' 00:15:40.128 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1600325 00:15:40.128 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:40.128 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.128 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1600325 00:15:40.128 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:40.128 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:40.128 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1600325' 00:15:40.128 killing process with pid 1600325 00:15:40.128 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1600325 00:15:40.128 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1600325 00:15:40.387 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:40.387 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:40.387 00:15:40.387 real 0m5.637s 00:15:40.387 user 0m15.755s 00:15:40.387 sys 0m0.521s 00:15:40.387 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.387 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:40.387 ************************************ 00:15:40.387 END TEST nvmf_vfio_user_nvme_compliance 00:15:40.387 ************************************ 00:15:40.387 12:24:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:40.387 12:24:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:40.387 12:24:02 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.387 12:24:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:40.387 ************************************ 00:15:40.387 START TEST nvmf_vfio_user_fuzz 00:15:40.387 ************************************ 00:15:40.387 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:40.387 * Looking for test storage... 00:15:40.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:15:40.387 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:40.387 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:15:40.387 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:40.647 12:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # 
ver2[v]=2 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:40.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.647 --rc genhtml_branch_coverage=1 00:15:40.647 --rc genhtml_function_coverage=1 00:15:40.647 --rc genhtml_legend=1 00:15:40.647 --rc geninfo_all_blocks=1 00:15:40.647 --rc geninfo_unexecuted_blocks=1 00:15:40.647 00:15:40.647 ' 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:40.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.647 --rc genhtml_branch_coverage=1 00:15:40.647 --rc genhtml_function_coverage=1 00:15:40.647 --rc genhtml_legend=1 00:15:40.647 --rc geninfo_all_blocks=1 00:15:40.647 --rc geninfo_unexecuted_blocks=1 00:15:40.647 00:15:40.647 ' 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:40.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.647 --rc genhtml_branch_coverage=1 00:15:40.647 --rc genhtml_function_coverage=1 00:15:40.647 --rc genhtml_legend=1 00:15:40.647 --rc geninfo_all_blocks=1 00:15:40.647 --rc geninfo_unexecuted_blocks=1 00:15:40.647 00:15:40.647 ' 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:40.647 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.647 --rc genhtml_branch_coverage=1 00:15:40.647 --rc genhtml_function_coverage=1 00:15:40.647 --rc genhtml_legend=1 00:15:40.647 --rc geninfo_all_blocks=1 00:15:40.647 --rc geninfo_unexecuted_blocks=1 00:15:40.647 00:15:40.647 ' 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.647 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.648 12:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.648 12:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:40.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:40.648 12:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1601502 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1601502' 00:15:40.648 Process pid: 1601502 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1601502 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1601502 ']' 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:40.648 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:40.907 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.907 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:40.907 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:41.844 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:41.844 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.844 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:41.844 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.844 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:41.844 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:41.844 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:41.844 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:41.844 malloc0 00:15:41.844 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.844 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:41.844 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.844 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:41.844 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.844 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:41.844 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.844 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:41.844 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.844 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:41.844 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.844 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:41.844 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.844 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER 
subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:41.844 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:13.926 Fuzzing completed. Shutting down the fuzz application 00:16:13.926 00:16:13.926 Dumping successful admin opcodes: 00:16:13.926 9, 10, 00:16:13.926 Dumping successful io opcodes: 00:16:13.926 0, 00:16:13.926 NS: 0x20000081ef00 I/O qp, Total commands completed: 962998, total successful commands: 3773, random_seed: 116249536 00:16:13.926 NS: 0x20000081ef00 admin qp, Total commands completed: 234064, total successful commands: 53, random_seed: 1513581440 00:16:13.926 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:13.926 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.926 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:13.926 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.926 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1601502 00:16:13.926 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1601502 ']' 00:16:13.926 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1601502 00:16:13.926 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:13.926 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.926 12:24:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1601502 00:16:13.926 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.926 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.926 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1601502' 00:16:13.926 killing process with pid 1601502 00:16:13.926 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1601502 00:16:13.926 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1601502 00:16:13.926 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:13.927 00:16:13.927 real 0m32.236s 00:16:13.927 user 0m29.587s 00:16:13.927 sys 0m31.198s 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:13.927 ************************************ 00:16:13.927 END TEST nvmf_vfio_user_fuzz 00:16:13.927 ************************************ 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:13.927 ************************************ 00:16:13.927 START TEST nvmf_auth_target 00:16:13.927 ************************************ 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:13.927 * Looking for test storage... 00:16:13.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 
00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:13.927 12:24:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:13.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.927 --rc genhtml_branch_coverage=1 00:16:13.927 --rc genhtml_function_coverage=1 00:16:13.927 --rc genhtml_legend=1 00:16:13.927 --rc geninfo_all_blocks=1 00:16:13.927 --rc geninfo_unexecuted_blocks=1 00:16:13.927 00:16:13.927 ' 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:13.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.927 --rc genhtml_branch_coverage=1 00:16:13.927 --rc genhtml_function_coverage=1 00:16:13.927 --rc genhtml_legend=1 00:16:13.927 --rc geninfo_all_blocks=1 00:16:13.927 --rc geninfo_unexecuted_blocks=1 00:16:13.927 00:16:13.927 ' 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:13.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.927 --rc genhtml_branch_coverage=1 00:16:13.927 --rc genhtml_function_coverage=1 00:16:13.927 --rc genhtml_legend=1 00:16:13.927 --rc geninfo_all_blocks=1 00:16:13.927 --rc geninfo_unexecuted_blocks=1 00:16:13.927 00:16:13.927 ' 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:13.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.927 --rc 
genhtml_branch_coverage=1 00:16:13.927 --rc genhtml_function_coverage=1 00:16:13.927 --rc genhtml_legend=1 00:16:13.927 --rc geninfo_all_blocks=1 00:16:13.927 --rc geninfo_unexecuted_blocks=1 00:16:13.927 00:16:13.927 ' 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.927 12:24:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.927 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:13.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:13.928 12:24:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:13.928 12:24:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:13.928 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:19.206 12:24:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:19.206 12:24:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:19.206 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:19.206 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:19.206 
12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:19.206 Found net devices under 0000:86:00.0: cvl_0_0 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:19.206 
12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:19.206 Found net devices under 0000:86:00.1: cvl_0_1 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:19.206 12:24:40 
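The device-discovery phase traced above (gather_supported_nvmf_pci_devs) buckets PCI NICs by vendor:device ID into e810/x722/mlx arrays, then resolves each PCI address to its net device via sysfs. A simplified stand-alone sketch of that walk is below; the specific Intel IDs are taken from the trace, but matching all Mellanox devices with a single wildcard (rather than the exact ID list in nvmf/common.sh) is a simplification of mine, and the helper is an approximation rather than the harness's actual function.

```shell
#!/bin/sh
# Rough sketch of the PCI bucketing seen in the trace: read each device's
# vendor/device IDs from sysfs and sort supported NICs into families.
intel=0x8086; mellanox=0x15b3
e810=""; x722=""; mlx=""

for dev in /sys/bus/pci/devices/*; do
    [ -e "$dev/vendor" ] || continue           # skip if sysfs is unavailable
    vendor=$(cat "$dev/vendor"); device=$(cat "$dev/device")
    case "$vendor:$device" in
        "$intel:0x1592"|"$intel:0x159b") e810="$e810 ${dev##*/}" ;;  # E810 IDs from the log
        "$intel:0x37d2")                 x722="$x722 ${dev##*/}" ;;
        "$mellanox:"*)                   mlx="$mlx ${dev##*/}" ;;    # simplified: real script lists exact IDs
    esac
done

echo "e810:$e810"
echo "x722:$x722"
echo "mlx:$mlx"
```

On the WFP8 node in this run, the two 0x8086:0x159b functions at 0000:86:00.0/.1 land in the e810 bucket, and their net devices (cvl_0_0, cvl_0_1) are then collected from /sys/bus/pci/devices/$pci/net/.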
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:19.206 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:19.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:19.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:16:19.207 00:16:19.207 --- 10.0.0.2 ping statistics --- 00:16:19.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.207 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:19.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:19.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:16:19.207 00:16:19.207 --- 10.0.0.1 ping statistics --- 00:16:19.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.207 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
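The nvmf_tcp_init sequence above builds a two-port loopback topology: one port (cvl_0_0) is moved into a namespace as the 10.0.0.2 target, its peer (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator, an iptables rule opens TCP/4420, and both directions are ping-verified. The dry-run sketch below mirrors that command order from the trace; run() only prints (and records) each command so the plan can be inspected without root privileges. Interface names and addresses come from the log; the script itself is my reconstruction, not the harness code.

```shell
#!/bin/sh
# Dry-run of the namespace topology from the trace: print each command
# instead of executing it, accumulating the plan for inspection.
PLAN=""
run() { PLAN="${PLAN}${*}; "; echo "+ $*"; }

TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"            # target port lives in the namespace
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"     # initiator side, root namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                              # root ns -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1          # target ns -> initiator
```

Because the target application is later launched with `ip netns exec cvl_0_0_ns_spdk …` (the NVMF_TARGET_NS_CMD prefix seen in the trace), only the namespaced port can serve 10.0.0.2:4420, which keeps traffic on the physical link rather than the kernel loopback.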
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1610245 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1610245 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1610245 ']' 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.207 12:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1610266 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1e9661b3dafa3931177c00e0c48a00885ab1ac4989b87f86 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.4q9 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1e9661b3dafa3931177c00e0c48a00885ab1ac4989b87f86 0 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1e9661b3dafa3931177c00e0c48a00885ab1ac4989b87f86 0 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1e9661b3dafa3931177c00e0c48a00885ab1ac4989b87f86 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.4q9 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.4q9 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.4q9 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9df1d47a377cdb95bf8a4d718ff54b156a17a21bc5e35940cd7e4128b953cd70 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.A4M 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9df1d47a377cdb95bf8a4d718ff54b156a17a21bc5e35940cd7e4128b953cd70 3 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9df1d47a377cdb95bf8a4d718ff54b156a17a21bc5e35940cd7e4128b953cd70 3 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9df1d47a377cdb95bf8a4d718ff54b156a17a21bc5e35940cd7e4128b953cd70 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.A4M 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.A4M 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.A4M 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7e5d86b304a8a37d4c661882069d42d0 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.c86 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7e5d86b304a8a37d4c661882069d42d0 1 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
7e5d86b304a8a37d4c661882069d42d0 1 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7e5d86b304a8a37d4c661882069d42d0 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.c86 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.c86 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.c86 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:19.207 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:19.208 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:19.208 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:19.208 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:19.208 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=211e10ad21f801898c70db9d038989847bd9fad7070e9c52 00:16:19.208 12:24:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:19.208 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Bt6 00:16:19.208 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 211e10ad21f801898c70db9d038989847bd9fad7070e9c52 2 00:16:19.208 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 211e10ad21f801898c70db9d038989847bd9fad7070e9c52 2 00:16:19.208 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:19.208 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:19.208 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=211e10ad21f801898c70db9d038989847bd9fad7070e9c52 00:16:19.208 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:19.208 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Bt6 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Bt6 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Bt6 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=94fdb30efdbd9a21b8a3a9a83fc2029bb46b16158d0b89a4 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.aAs 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 94fdb30efdbd9a21b8a3a9a83fc2029bb46b16158d0b89a4 2 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 94fdb30efdbd9a21b8a3a9a83fc2029bb46b16158d0b89a4 2 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=94fdb30efdbd9a21b8a3a9a83fc2029bb46b16158d0b89a4 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.aAs 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.aAs 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.aAs 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:19.467 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0176392edbcc00ccc1a2106b3814bc42 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.9ny 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0176392edbcc00ccc1a2106b3814bc42 1 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0176392edbcc00ccc1a2106b3814bc42 1 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0176392edbcc00ccc1a2106b3814bc42 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.9ny 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.9ny 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.9ny 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=59d355cbec32f529f05ffb1f5a4807be23e0e0da08fa575d669c7e79f64955c5 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.gOG 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 59d355cbec32f529f05ffb1f5a4807be23e0e0da08fa575d669c7e79f64955c5 3 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 59d355cbec32f529f05ffb1f5a4807be23e0e0da08fa575d669c7e79f64955c5 3 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=59d355cbec32f529f05ffb1f5a4807be23e0e0da08fa575d669c7e79f64955c5 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.gOG 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.gOG 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.gOG 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1610245 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1610245 ']' 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
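The gen_dhchap_key / format_dhchap_key calls traced above read N random bytes (via xxd from /dev/urandom), then wrap them in the NVMe DH-HMAC-CHAP secret representation "DHHC-1:&lt;hash&gt;:&lt;base64(key || crc32(key))&gt;:", where the hash byte is 00/01/02/03 for null/sha256/sha384/sha512 (matching the digest=0..3 values in the log). The sketch below is an approximation inferred from the trace and the NVMe secret format, not SPDK's actual helper; it uses od instead of xxd for portability, and the embedded python stands in for the `python -` step at nvmf/common.sh@733.

```shell
#!/bin/sh
# Approximation of gen_dhchap_key <digest> <hex-len>: random key bytes,
# wrapped as DHHC-1:<hash-id>:<base64(key || crc32-le(key))>:
gen_dhchap_key() {
    len_bytes=$(( $2 / 2 ))                 # $2 is the key length in hex digits (e.g. 64 -> 32 bytes)
    case "$1" in                            # hash indicator byte, as in the trace
        null) digest=0 ;; sha256) digest=1 ;; sha384) digest=2 ;; sha512) digest=3 ;;
        *) return 1 ;;
    esac
    key_hex=$(od -An -tx1 -N "$len_bytes" /dev/urandom | tr -d ' \n')
    python3 - "$key_hex" "$digest" <<'EOF'
import sys, base64, struct, zlib
key = bytes.fromhex(sys.argv[1])
digest = int(sys.argv[2])
# Append the little-endian CRC-32 of the key, then base64 the whole payload.
payload = key + struct.pack("<I", zlib.crc32(key) & 0xFFFFFFFF)
print(f"DHHC-1:{digest:02x}:{base64.b64encode(payload).decode()}:")
EOF
}

secret=$(gen_dhchap_key sha512 64)
echo "$secret"
```

In the test itself each secret is written to a mktemp'd file (e.g. /tmp/spdk.key-sha512.gOG), chmod 0600, and the path, not the secret, is what gets handed to the keys[]/ckeys[] arrays and later to keyring_file_add_key.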
00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.468 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.727 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.727 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:19.727 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1610266 /var/tmp/host.sock 00:16:19.727 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1610266 ']' 00:16:19.727 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:19.727 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.727 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:19.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:16:19.727 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.727 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.986 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.986 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:19.986 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:19.986 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.986 12:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.986 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.986 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:19.986 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.4q9 00:16:19.986 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.986 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.986 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.986 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.4q9 00:16:19.986 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.4q9 00:16:20.244 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.A4M ]] 00:16:20.244 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.A4M 00:16:20.244 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.244 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.244 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.244 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.A4M 00:16:20.245 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.A4M 00:16:20.503 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:20.503 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.c86 00:16:20.503 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.503 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.503 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.503 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.c86 00:16:20.503 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.c86 00:16:20.503 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 
-- # [[ -n /tmp/spdk.key-sha384.Bt6 ]] 00:16:20.503 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bt6 00:16:20.503 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.503 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.503 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.503 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bt6 00:16:20.503 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bt6 00:16:20.762 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:20.762 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.aAs 00:16:20.762 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.762 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.762 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.762 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.aAs 00:16:20.762 12:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.aAs 00:16:21.021 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.9ny ]] 00:16:21.021 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9ny 00:16:21.021 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.021 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.021 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.021 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9ny 00:16:21.021 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9ny 00:16:21.279 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:21.279 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.gOG 00:16:21.279 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.279 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.279 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.279 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.gOG 00:16:21.279 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.gOG 00:16:21.279 12:24:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:21.279 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:21.279 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.279 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.279 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:21.279 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:21.538 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:21.538 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.538 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:21.538 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:21.538 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:21.538 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.538 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.538 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.538 12:24:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.538 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.538 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.538 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.538 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.797 00:16:21.797 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.797 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.797 12:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.055 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.055 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.055 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.055 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:22.055 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.055 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.055 { 00:16:22.055 "cntlid": 1, 00:16:22.055 "qid": 0, 00:16:22.055 "state": "enabled", 00:16:22.055 "thread": "nvmf_tgt_poll_group_000", 00:16:22.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:22.055 "listen_address": { 00:16:22.055 "trtype": "TCP", 00:16:22.055 "adrfam": "IPv4", 00:16:22.055 "traddr": "10.0.0.2", 00:16:22.055 "trsvcid": "4420" 00:16:22.055 }, 00:16:22.055 "peer_address": { 00:16:22.055 "trtype": "TCP", 00:16:22.055 "adrfam": "IPv4", 00:16:22.055 "traddr": "10.0.0.1", 00:16:22.055 "trsvcid": "60622" 00:16:22.055 }, 00:16:22.055 "auth": { 00:16:22.055 "state": "completed", 00:16:22.055 "digest": "sha256", 00:16:22.055 "dhgroup": "null" 00:16:22.055 } 00:16:22.055 } 00:16:22.055 ]' 00:16:22.055 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.055 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.055 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.055 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:22.055 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.055 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.055 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.055 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.313 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:16:22.314 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:16:22.881 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.881 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:22.881 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.881 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.881 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.881 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.881 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:16:22.881 12:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:23.138 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:23.138 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.138 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:23.138 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:23.138 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:23.138 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.138 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.138 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.138 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.138 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.138 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.138 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.138 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.395 00:16:23.395 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.395 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.395 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.654 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.654 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.654 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.654 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.654 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.654 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.654 { 00:16:23.654 "cntlid": 3, 00:16:23.654 "qid": 0, 00:16:23.654 "state": "enabled", 00:16:23.654 "thread": "nvmf_tgt_poll_group_000", 00:16:23.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:23.654 "listen_address": { 00:16:23.654 "trtype": "TCP", 00:16:23.654 "adrfam": "IPv4", 
00:16:23.654 "traddr": "10.0.0.2", 00:16:23.654 "trsvcid": "4420" 00:16:23.654 }, 00:16:23.654 "peer_address": { 00:16:23.654 "trtype": "TCP", 00:16:23.654 "adrfam": "IPv4", 00:16:23.654 "traddr": "10.0.0.1", 00:16:23.654 "trsvcid": "60648" 00:16:23.654 }, 00:16:23.654 "auth": { 00:16:23.654 "state": "completed", 00:16:23.654 "digest": "sha256", 00:16:23.654 "dhgroup": "null" 00:16:23.654 } 00:16:23.654 } 00:16:23.654 ]' 00:16:23.654 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.654 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.654 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.654 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:23.654 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.654 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.654 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.654 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.913 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:16:23.913 12:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:16:24.480 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.480 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.480 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.480 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.480 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.480 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.480 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:24.480 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:24.739 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:24.739 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.739 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:24.739 
12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:24.739 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:24.739 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.739 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.739 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.739 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.739 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.739 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.739 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.739 12:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.998 00:16:24.998 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.998 12:24:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.998 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.257 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.257 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.257 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.257 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.257 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.257 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.257 { 00:16:25.257 "cntlid": 5, 00:16:25.257 "qid": 0, 00:16:25.257 "state": "enabled", 00:16:25.257 "thread": "nvmf_tgt_poll_group_000", 00:16:25.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:25.257 "listen_address": { 00:16:25.257 "trtype": "TCP", 00:16:25.257 "adrfam": "IPv4", 00:16:25.257 "traddr": "10.0.0.2", 00:16:25.257 "trsvcid": "4420" 00:16:25.257 }, 00:16:25.257 "peer_address": { 00:16:25.257 "trtype": "TCP", 00:16:25.257 "adrfam": "IPv4", 00:16:25.257 "traddr": "10.0.0.1", 00:16:25.257 "trsvcid": "60668" 00:16:25.257 }, 00:16:25.257 "auth": { 00:16:25.257 "state": "completed", 00:16:25.257 "digest": "sha256", 00:16:25.257 "dhgroup": "null" 00:16:25.257 } 00:16:25.257 } 00:16:25.257 ]' 00:16:25.257 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.257 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:25.257 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.257 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:25.257 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.257 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.257 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.257 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.515 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:16:25.515 12:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:16:26.083 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.083 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.083 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.083 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.083 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.083 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.083 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:26.083 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:26.342 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:26.342 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.342 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:26.342 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:26.342 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:26.342 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.342 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:26.342 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.343 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.343 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.343 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:26.343 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.343 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.601 00:16:26.601 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.601 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.601 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.601 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.860 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.860 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.860 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:26.860 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.860 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.860 { 00:16:26.860 "cntlid": 7, 00:16:26.860 "qid": 0, 00:16:26.860 "state": "enabled", 00:16:26.860 "thread": "nvmf_tgt_poll_group_000", 00:16:26.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:26.860 "listen_address": { 00:16:26.860 "trtype": "TCP", 00:16:26.860 "adrfam": "IPv4", 00:16:26.860 "traddr": "10.0.0.2", 00:16:26.860 "trsvcid": "4420" 00:16:26.860 }, 00:16:26.860 "peer_address": { 00:16:26.860 "trtype": "TCP", 00:16:26.860 "adrfam": "IPv4", 00:16:26.861 "traddr": "10.0.0.1", 00:16:26.861 "trsvcid": "60706" 00:16:26.861 }, 00:16:26.861 "auth": { 00:16:26.861 "state": "completed", 00:16:26.861 "digest": "sha256", 00:16:26.861 "dhgroup": "null" 00:16:26.861 } 00:16:26.861 } 00:16:26.861 ]' 00:16:26.861 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.861 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.861 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.861 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:26.861 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.861 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.861 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.861 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.119 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:16:27.120 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:16:27.687 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.687 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:27.687 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.687 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.687 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.687 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:27.687 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.687 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:27.687 12:24:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:27.946 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:27.946 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.946 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:27.946 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:27.946 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:27.946 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.946 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.946 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.946 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.946 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.946 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.946 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.946 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.205 00:16:28.205 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.205 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.205 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.464 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.464 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.464 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.464 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.464 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.464 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.464 { 00:16:28.464 "cntlid": 9, 00:16:28.464 "qid": 0, 00:16:28.464 "state": "enabled", 00:16:28.464 "thread": "nvmf_tgt_poll_group_000", 00:16:28.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:28.464 "listen_address": { 00:16:28.464 "trtype": "TCP", 00:16:28.464 "adrfam": "IPv4", 00:16:28.464 "traddr": "10.0.0.2", 00:16:28.464 
"trsvcid": "4420" 00:16:28.464 }, 00:16:28.464 "peer_address": { 00:16:28.464 "trtype": "TCP", 00:16:28.464 "adrfam": "IPv4", 00:16:28.464 "traddr": "10.0.0.1", 00:16:28.464 "trsvcid": "35886" 00:16:28.464 }, 00:16:28.464 "auth": { 00:16:28.464 "state": "completed", 00:16:28.464 "digest": "sha256", 00:16:28.464 "dhgroup": "ffdhe2048" 00:16:28.464 } 00:16:28.464 } 00:16:28.464 ]' 00:16:28.464 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.464 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.464 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.464 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:28.464 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.464 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.464 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.464 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.723 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:16:28.723 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:16:29.294 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.294 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.294 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.295 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.295 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.295 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.295 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:29.295 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:29.554 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:29.554 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.554 12:24:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.554 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:29.554 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:29.554 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.554 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.554 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.554 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.554 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.554 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.554 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.554 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.813 00:16:29.813 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.813 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.813 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.813 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.813 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.813 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.813 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.813 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.813 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.813 { 00:16:29.813 "cntlid": 11, 00:16:29.813 "qid": 0, 00:16:29.813 "state": "enabled", 00:16:29.813 "thread": "nvmf_tgt_poll_group_000", 00:16:29.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:29.813 "listen_address": { 00:16:29.813 "trtype": "TCP", 00:16:29.813 "adrfam": "IPv4", 00:16:29.813 "traddr": "10.0.0.2", 00:16:29.813 "trsvcid": "4420" 00:16:29.813 }, 00:16:29.813 "peer_address": { 00:16:29.813 "trtype": "TCP", 00:16:29.813 "adrfam": "IPv4", 00:16:29.813 "traddr": "10.0.0.1", 00:16:29.813 "trsvcid": "35914" 00:16:29.813 }, 00:16:29.813 "auth": { 00:16:29.813 "state": "completed", 00:16:29.813 "digest": "sha256", 00:16:29.813 "dhgroup": "ffdhe2048" 00:16:29.813 } 00:16:29.813 } 00:16:29.813 ]' 00:16:29.813 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.072 12:24:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.072 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.072 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:30.072 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.072 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.072 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.072 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.331 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:16:30.331 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:16:30.899 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.899 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.899 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.899 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.899 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.899 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.899 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:30.899 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:30.899 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:30.899 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.899 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:30.899 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:30.899 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:30.899 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.899 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:16:30.899 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.899 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.158 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.158 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.158 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.158 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.158 00:16:31.417 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.417 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.417 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.417 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.417 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.417 12:24:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.417 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.417 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.417 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.417 { 00:16:31.417 "cntlid": 13, 00:16:31.417 "qid": 0, 00:16:31.417 "state": "enabled", 00:16:31.417 "thread": "nvmf_tgt_poll_group_000", 00:16:31.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:31.417 "listen_address": { 00:16:31.417 "trtype": "TCP", 00:16:31.417 "adrfam": "IPv4", 00:16:31.417 "traddr": "10.0.0.2", 00:16:31.417 "trsvcid": "4420" 00:16:31.417 }, 00:16:31.417 "peer_address": { 00:16:31.417 "trtype": "TCP", 00:16:31.417 "adrfam": "IPv4", 00:16:31.417 "traddr": "10.0.0.1", 00:16:31.417 "trsvcid": "35950" 00:16:31.417 }, 00:16:31.417 "auth": { 00:16:31.417 "state": "completed", 00:16:31.418 "digest": "sha256", 00:16:31.418 "dhgroup": "ffdhe2048" 00:16:31.418 } 00:16:31.418 } 00:16:31.418 ]' 00:16:31.418 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.418 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.418 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.676 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:31.676 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.676 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.676 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.676 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.935 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:16:31.935 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:16:32.503 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.503 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.503 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.503 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.503 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.503 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.503 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:32.503 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:32.503 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:32.503 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.503 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:32.503 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:32.503 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:32.503 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.503 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:32.503 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.503 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.503 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.503 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:32.503 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.503 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:32.762 00:16:32.762 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.762 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.762 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.022 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.022 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.022 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.022 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.022 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.022 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.022 { 00:16:33.022 "cntlid": 15, 00:16:33.022 "qid": 0, 00:16:33.022 "state": "enabled", 00:16:33.022 "thread": "nvmf_tgt_poll_group_000", 00:16:33.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:33.022 "listen_address": { 00:16:33.022 "trtype": "TCP", 00:16:33.022 
"adrfam": "IPv4", 00:16:33.022 "traddr": "10.0.0.2", 00:16:33.022 "trsvcid": "4420" 00:16:33.022 }, 00:16:33.022 "peer_address": { 00:16:33.022 "trtype": "TCP", 00:16:33.022 "adrfam": "IPv4", 00:16:33.022 "traddr": "10.0.0.1", 00:16:33.022 "trsvcid": "35980" 00:16:33.022 }, 00:16:33.022 "auth": { 00:16:33.022 "state": "completed", 00:16:33.022 "digest": "sha256", 00:16:33.022 "dhgroup": "ffdhe2048" 00:16:33.022 } 00:16:33.022 } 00:16:33.022 ]' 00:16:33.022 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.022 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.022 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.281 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:33.281 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.281 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.281 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.281 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.281 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:16:33.281 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:16:33.849 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.849 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.849 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.849 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.849 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.849 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:33.849 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.849 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:33.849 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:34.108 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:34.108 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.108 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.108 
12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:34.108 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:34.108 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.108 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.108 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.108 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.108 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.108 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.108 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.108 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.367 00:16:34.367 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.367 12:24:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.367 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.626 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.626 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.626 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.626 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.626 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.626 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.626 { 00:16:34.626 "cntlid": 17, 00:16:34.626 "qid": 0, 00:16:34.626 "state": "enabled", 00:16:34.626 "thread": "nvmf_tgt_poll_group_000", 00:16:34.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:34.626 "listen_address": { 00:16:34.626 "trtype": "TCP", 00:16:34.626 "adrfam": "IPv4", 00:16:34.626 "traddr": "10.0.0.2", 00:16:34.626 "trsvcid": "4420" 00:16:34.626 }, 00:16:34.626 "peer_address": { 00:16:34.626 "trtype": "TCP", 00:16:34.626 "adrfam": "IPv4", 00:16:34.626 "traddr": "10.0.0.1", 00:16:34.626 "trsvcid": "36010" 00:16:34.626 }, 00:16:34.626 "auth": { 00:16:34.626 "state": "completed", 00:16:34.626 "digest": "sha256", 00:16:34.626 "dhgroup": "ffdhe3072" 00:16:34.626 } 00:16:34.626 } 00:16:34.626 ]' 00:16:34.626 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.626 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:16:34.626 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.626 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:34.626 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.885 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.885 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.885 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.885 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:16:34.885 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:16:35.452 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.452 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- 
# rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.452 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.452 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.452 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.452 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.452 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.452 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.711 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:35.711 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.711 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:35.711 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:35.711 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:35.711 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.711 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:35.711 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.711 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.711 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.711 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.711 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.711 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.970 00:16:35.970 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.970 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.970 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.229 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.229 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.229 12:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.229 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.229 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.229 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.229 { 00:16:36.229 "cntlid": 19, 00:16:36.229 "qid": 0, 00:16:36.229 "state": "enabled", 00:16:36.229 "thread": "nvmf_tgt_poll_group_000", 00:16:36.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:36.229 "listen_address": { 00:16:36.229 "trtype": "TCP", 00:16:36.229 "adrfam": "IPv4", 00:16:36.229 "traddr": "10.0.0.2", 00:16:36.229 "trsvcid": "4420" 00:16:36.229 }, 00:16:36.229 "peer_address": { 00:16:36.229 "trtype": "TCP", 00:16:36.229 "adrfam": "IPv4", 00:16:36.229 "traddr": "10.0.0.1", 00:16:36.229 "trsvcid": "36038" 00:16:36.229 }, 00:16:36.229 "auth": { 00:16:36.229 "state": "completed", 00:16:36.229 "digest": "sha256", 00:16:36.229 "dhgroup": "ffdhe3072" 00:16:36.229 } 00:16:36.229 } 00:16:36.229 ]' 00:16:36.229 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.229 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.229 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.229 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:36.229 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.488 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.488 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.488 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.488 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:16:36.488 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:16:37.092 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.092 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.092 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.092 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.092 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.092 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.092 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:37.092 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:37.369 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:37.369 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.369 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.369 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:37.369 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:37.369 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.369 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.369 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.369 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.369 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.369 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.369 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.369 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:37.628 00:16:37.628 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.628 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.628 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.887 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.887 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.887 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.887 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.887 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.887 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.887 { 00:16:37.887 "cntlid": 21, 00:16:37.887 "qid": 0, 00:16:37.887 "state": "enabled", 00:16:37.887 "thread": "nvmf_tgt_poll_group_000", 00:16:37.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:37.887 
"listen_address": { 00:16:37.887 "trtype": "TCP", 00:16:37.887 "adrfam": "IPv4", 00:16:37.887 "traddr": "10.0.0.2", 00:16:37.887 "trsvcid": "4420" 00:16:37.887 }, 00:16:37.887 "peer_address": { 00:16:37.887 "trtype": "TCP", 00:16:37.887 "adrfam": "IPv4", 00:16:37.887 "traddr": "10.0.0.1", 00:16:37.887 "trsvcid": "38146" 00:16:37.887 }, 00:16:37.887 "auth": { 00:16:37.887 "state": "completed", 00:16:37.887 "digest": "sha256", 00:16:37.887 "dhgroup": "ffdhe3072" 00:16:37.887 } 00:16:37.887 } 00:16:37.887 ]' 00:16:37.887 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.887 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.887 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.887 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:37.887 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.887 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.887 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.887 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.146 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:16:38.146 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:16:38.714 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.714 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.714 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.714 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.714 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.714 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.714 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.714 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.972 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:38.972 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.972 12:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:38.972 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:38.972 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:38.972 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.972 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:38.972 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.972 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.972 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.972 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:38.972 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.972 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:39.231 00:16:39.231 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.231 12:25:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.231 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.490 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.490 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.490 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.490 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.490 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.490 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.490 { 00:16:39.490 "cntlid": 23, 00:16:39.490 "qid": 0, 00:16:39.490 "state": "enabled", 00:16:39.490 "thread": "nvmf_tgt_poll_group_000", 00:16:39.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:39.490 "listen_address": { 00:16:39.490 "trtype": "TCP", 00:16:39.490 "adrfam": "IPv4", 00:16:39.490 "traddr": "10.0.0.2", 00:16:39.490 "trsvcid": "4420" 00:16:39.490 }, 00:16:39.490 "peer_address": { 00:16:39.490 "trtype": "TCP", 00:16:39.490 "adrfam": "IPv4", 00:16:39.490 "traddr": "10.0.0.1", 00:16:39.490 "trsvcid": "38170" 00:16:39.490 }, 00:16:39.490 "auth": { 00:16:39.490 "state": "completed", 00:16:39.490 "digest": "sha256", 00:16:39.490 "dhgroup": "ffdhe3072" 00:16:39.490 } 00:16:39.490 } 00:16:39.490 ]' 00:16:39.490 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.490 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:16:39.490 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.490 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.490 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.490 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.490 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.490 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.748 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:16:39.748 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:16:40.316 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.316 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:40.316 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.316 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.316 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.316 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:40.316 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.316 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:40.316 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:40.575 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:40.575 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.575 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.575 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:40.575 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:40.575 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.575 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.575 12:25:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.575 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.575 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.575 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.575 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.575 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.833 00:16:40.833 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.833 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.833 12:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.092 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.092 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.092 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.092 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.092 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.092 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.092 { 00:16:41.092 "cntlid": 25, 00:16:41.092 "qid": 0, 00:16:41.092 "state": "enabled", 00:16:41.092 "thread": "nvmf_tgt_poll_group_000", 00:16:41.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:41.092 "listen_address": { 00:16:41.092 "trtype": "TCP", 00:16:41.092 "adrfam": "IPv4", 00:16:41.092 "traddr": "10.0.0.2", 00:16:41.092 "trsvcid": "4420" 00:16:41.092 }, 00:16:41.092 "peer_address": { 00:16:41.092 "trtype": "TCP", 00:16:41.092 "adrfam": "IPv4", 00:16:41.092 "traddr": "10.0.0.1", 00:16:41.092 "trsvcid": "38196" 00:16:41.092 }, 00:16:41.092 "auth": { 00:16:41.092 "state": "completed", 00:16:41.092 "digest": "sha256", 00:16:41.092 "dhgroup": "ffdhe4096" 00:16:41.092 } 00:16:41.092 } 00:16:41.092 ]' 00:16:41.092 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.092 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.092 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.092 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:41.092 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.092 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.092 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:41.092 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.350 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:16:41.350 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:16:41.917 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.917 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.917 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.917 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.917 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.917 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.917 
12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:41.917 12:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.176 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:42.176 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.176 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.176 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:42.176 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:42.176 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.176 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.176 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.176 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.176 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.176 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.176 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # 
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.176 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.435 00:16:42.435 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.435 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.435 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.694 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.694 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.694 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.694 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.694 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.694 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.694 { 00:16:42.694 "cntlid": 27, 00:16:42.694 "qid": 0, 00:16:42.694 "state": "enabled", 00:16:42.694 "thread": "nvmf_tgt_poll_group_000", 00:16:42.694 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:42.694 "listen_address": { 00:16:42.694 "trtype": "TCP", 00:16:42.694 "adrfam": "IPv4", 00:16:42.694 "traddr": "10.0.0.2", 00:16:42.694 "trsvcid": "4420" 00:16:42.694 }, 00:16:42.694 "peer_address": { 00:16:42.694 "trtype": "TCP", 00:16:42.694 "adrfam": "IPv4", 00:16:42.694 "traddr": "10.0.0.1", 00:16:42.694 "trsvcid": "38220" 00:16:42.694 }, 00:16:42.694 "auth": { 00:16:42.694 "state": "completed", 00:16:42.694 "digest": "sha256", 00:16:42.694 "dhgroup": "ffdhe4096" 00:16:42.694 } 00:16:42.694 } 00:16:42.694 ]' 00:16:42.694 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.694 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.694 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.694 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:42.694 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.694 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.694 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.694 12:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.953 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:16:42.953 12:25:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:16:43.520 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.520 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.520 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.520 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.520 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.520 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.520 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:43.520 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:43.779 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:43.779 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:16:43.779 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.779 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:43.779 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:43.779 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.779 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.779 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.779 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.779 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.779 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.779 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.779 12:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.038 
00:16:44.038 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.038 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.038 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.297 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.297 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.297 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.297 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.297 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.297 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.297 { 00:16:44.297 "cntlid": 29, 00:16:44.297 "qid": 0, 00:16:44.297 "state": "enabled", 00:16:44.297 "thread": "nvmf_tgt_poll_group_000", 00:16:44.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:44.297 "listen_address": { 00:16:44.297 "trtype": "TCP", 00:16:44.297 "adrfam": "IPv4", 00:16:44.297 "traddr": "10.0.0.2", 00:16:44.297 "trsvcid": "4420" 00:16:44.297 }, 00:16:44.297 "peer_address": { 00:16:44.297 "trtype": "TCP", 00:16:44.297 "adrfam": "IPv4", 00:16:44.297 "traddr": "10.0.0.1", 00:16:44.297 "trsvcid": "38244" 00:16:44.297 }, 00:16:44.297 "auth": { 00:16:44.297 "state": "completed", 00:16:44.297 "digest": "sha256", 00:16:44.297 "dhgroup": "ffdhe4096" 00:16:44.297 } 00:16:44.297 } 00:16:44.297 ]' 00:16:44.297 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.297 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.297 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.297 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:44.297 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.556 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.556 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.556 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.556 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:16:44.556 12:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:16:45.124 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.124 12:25:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.124 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.124 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.124 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.124 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.124 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:45.124 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:45.383 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:45.383 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.383 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.383 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:45.384 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:45.384 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.384 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:45.384 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.384 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.384 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.384 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:45.384 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.384 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.642 00:16:45.642 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.642 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.642 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.902 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.902 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.902 12:25:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.902 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.902 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.902 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.902 { 00:16:45.902 "cntlid": 31, 00:16:45.902 "qid": 0, 00:16:45.902 "state": "enabled", 00:16:45.902 "thread": "nvmf_tgt_poll_group_000", 00:16:45.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:45.902 "listen_address": { 00:16:45.902 "trtype": "TCP", 00:16:45.902 "adrfam": "IPv4", 00:16:45.902 "traddr": "10.0.0.2", 00:16:45.902 "trsvcid": "4420" 00:16:45.902 }, 00:16:45.902 "peer_address": { 00:16:45.902 "trtype": "TCP", 00:16:45.902 "adrfam": "IPv4", 00:16:45.902 "traddr": "10.0.0.1", 00:16:45.902 "trsvcid": "38274" 00:16:45.902 }, 00:16:45.902 "auth": { 00:16:45.902 "state": "completed", 00:16:45.902 "digest": "sha256", 00:16:45.902 "dhgroup": "ffdhe4096" 00:16:45.902 } 00:16:45.902 } 00:16:45.902 ]' 00:16:45.902 12:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.902 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.902 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.902 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:45.902 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.160 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.160 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.160 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.160 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:16:46.160 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:16:46.729 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.729 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.729 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.729 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.729 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.729 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.729 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.729 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:46.729 12:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:46.988 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:46.988 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.988 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:46.988 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:46.988 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:46.988 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.988 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.988 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.988 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.988 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.988 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.988 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.988 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.556 00:16:47.556 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.556 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.556 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.556 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.556 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.556 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.556 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.556 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.556 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.556 { 00:16:47.556 "cntlid": 33, 00:16:47.556 "qid": 0, 00:16:47.556 "state": "enabled", 00:16:47.556 "thread": "nvmf_tgt_poll_group_000", 00:16:47.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:47.556 
"listen_address": { 00:16:47.556 "trtype": "TCP", 00:16:47.556 "adrfam": "IPv4", 00:16:47.556 "traddr": "10.0.0.2", 00:16:47.556 "trsvcid": "4420" 00:16:47.556 }, 00:16:47.556 "peer_address": { 00:16:47.556 "trtype": "TCP", 00:16:47.556 "adrfam": "IPv4", 00:16:47.556 "traddr": "10.0.0.1", 00:16:47.556 "trsvcid": "43104" 00:16:47.556 }, 00:16:47.556 "auth": { 00:16:47.556 "state": "completed", 00:16:47.556 "digest": "sha256", 00:16:47.556 "dhgroup": "ffdhe6144" 00:16:47.556 } 00:16:47.556 } 00:16:47.556 ]' 00:16:47.556 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.556 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.556 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.815 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:47.815 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.815 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.815 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.815 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.075 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:16:48.075 12:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:16:48.642 12:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.642 12:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.642 12:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.642 12:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.642 12:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.642 12:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.642 12:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:48.642 12:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:48.642 12:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:48.642 12:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:48.642 12:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.642 12:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:48.642 12:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:48.642 12:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.642 12:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.642 12:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.642 12:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.642 12:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.642 12:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.642 12:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.642 12:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.210 00:16:49.210 
12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.210 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.210 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.210 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.210 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.210 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.210 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.210 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.210 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.210 { 00:16:49.210 "cntlid": 35, 00:16:49.210 "qid": 0, 00:16:49.210 "state": "enabled", 00:16:49.210 "thread": "nvmf_tgt_poll_group_000", 00:16:49.210 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:49.210 "listen_address": { 00:16:49.210 "trtype": "TCP", 00:16:49.210 "adrfam": "IPv4", 00:16:49.210 "traddr": "10.0.0.2", 00:16:49.210 "trsvcid": "4420" 00:16:49.210 }, 00:16:49.210 "peer_address": { 00:16:49.210 "trtype": "TCP", 00:16:49.210 "adrfam": "IPv4", 00:16:49.210 "traddr": "10.0.0.1", 00:16:49.210 "trsvcid": "43126" 00:16:49.210 }, 00:16:49.210 "auth": { 00:16:49.210 "state": "completed", 00:16:49.210 "digest": "sha256", 00:16:49.210 "dhgroup": "ffdhe6144" 00:16:49.210 } 00:16:49.210 } 00:16:49.210 ]' 00:16:49.210 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.469 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.469 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.469 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.469 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.469 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.469 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.469 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.728 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:16:49.728 12:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:16:50.295 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.295 12:25:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.295 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.295 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.295 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.295 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.295 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.295 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.295 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:50.295 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.295 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.295 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:50.295 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:50.295 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.295 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.295 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.295 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.295 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.295 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.295 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.295 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.862 00:16:50.862 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.862 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.862 12:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.862 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.862 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.862 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.862 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.121 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.122 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.122 { 00:16:51.122 "cntlid": 37, 00:16:51.122 "qid": 0, 00:16:51.122 "state": "enabled", 00:16:51.122 "thread": "nvmf_tgt_poll_group_000", 00:16:51.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:51.122 "listen_address": { 00:16:51.122 "trtype": "TCP", 00:16:51.122 "adrfam": "IPv4", 00:16:51.122 "traddr": "10.0.0.2", 00:16:51.122 "trsvcid": "4420" 00:16:51.122 }, 00:16:51.122 "peer_address": { 00:16:51.122 "trtype": "TCP", 00:16:51.122 "adrfam": "IPv4", 00:16:51.122 "traddr": "10.0.0.1", 00:16:51.122 "trsvcid": "43154" 00:16:51.122 }, 00:16:51.122 "auth": { 00:16:51.122 "state": "completed", 00:16:51.122 "digest": "sha256", 00:16:51.122 "dhgroup": "ffdhe6144" 00:16:51.122 } 00:16:51.122 } 00:16:51.122 ]' 00:16:51.122 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.122 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.122 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.122 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:51.122 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.122 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:51.122 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.122 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.381 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:16:51.381 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:16:51.948 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.948 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.948 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.948 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.948 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.948 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for 
keyid in "${!keys[@]}" 00:16:51.948 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:51.948 12:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:52.208 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:52.208 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.208 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:52.208 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:52.208 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:52.208 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.208 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:52.208 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.208 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.208 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.208 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:52.208 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.208 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:52.467 00:16:52.467 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.467 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.467 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.726 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.726 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.726 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.726 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.726 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.726 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.726 { 00:16:52.726 "cntlid": 39, 00:16:52.726 "qid": 0, 00:16:52.726 "state": "enabled", 00:16:52.726 "thread": "nvmf_tgt_poll_group_000", 00:16:52.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:52.726 
"listen_address": { 00:16:52.726 "trtype": "TCP", 00:16:52.726 "adrfam": "IPv4", 00:16:52.726 "traddr": "10.0.0.2", 00:16:52.726 "trsvcid": "4420" 00:16:52.726 }, 00:16:52.726 "peer_address": { 00:16:52.726 "trtype": "TCP", 00:16:52.726 "adrfam": "IPv4", 00:16:52.726 "traddr": "10.0.0.1", 00:16:52.726 "trsvcid": "43176" 00:16:52.726 }, 00:16:52.726 "auth": { 00:16:52.726 "state": "completed", 00:16:52.726 "digest": "sha256", 00:16:52.726 "dhgroup": "ffdhe6144" 00:16:52.726 } 00:16:52.726 } 00:16:52.726 ]' 00:16:52.726 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.726 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.726 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.726 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:52.726 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.726 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.726 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.726 12:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.984 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:16:52.984 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:16:53.550 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.550 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.550 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.550 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.550 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.550 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.550 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.550 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:53.550 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:53.809 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:53.809 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.809 12:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:53.809 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:53.809 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:53.809 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.809 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.809 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.809 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.809 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.809 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.809 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.810 12:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.376 00:16:54.377 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.377 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.377 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.636 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.636 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.636 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.636 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.636 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.636 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.636 { 00:16:54.636 "cntlid": 41, 00:16:54.636 "qid": 0, 00:16:54.636 "state": "enabled", 00:16:54.636 "thread": "nvmf_tgt_poll_group_000", 00:16:54.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:54.636 "listen_address": { 00:16:54.636 "trtype": "TCP", 00:16:54.636 "adrfam": "IPv4", 00:16:54.636 "traddr": "10.0.0.2", 00:16:54.636 "trsvcid": "4420" 00:16:54.636 }, 00:16:54.636 "peer_address": { 00:16:54.636 "trtype": "TCP", 00:16:54.636 "adrfam": "IPv4", 00:16:54.636 "traddr": "10.0.0.1", 00:16:54.636 "trsvcid": "43200" 00:16:54.636 }, 00:16:54.636 "auth": { 00:16:54.636 "state": "completed", 00:16:54.636 "digest": "sha256", 00:16:54.636 "dhgroup": "ffdhe8192" 00:16:54.636 } 00:16:54.636 } 00:16:54.636 ]' 00:16:54.636 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.636 12:25:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.636 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.636 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:54.636 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.636 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.636 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.636 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.894 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:16:54.894 12:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:16:55.461 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:55.461 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.461 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.461 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.461 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.461 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.461 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:55.461 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:55.719 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:55.719 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.719 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:55.719 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:55.719 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:55.719 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.719 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.719 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.719 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.719 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.719 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.719 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.719 12:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.287 00:16:56.287 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.287 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.287 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.287 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.287 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.287 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.287 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.287 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.287 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.287 { 00:16:56.287 "cntlid": 43, 00:16:56.287 "qid": 0, 00:16:56.287 "state": "enabled", 00:16:56.287 "thread": "nvmf_tgt_poll_group_000", 00:16:56.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:56.287 "listen_address": { 00:16:56.287 "trtype": "TCP", 00:16:56.287 "adrfam": "IPv4", 00:16:56.287 "traddr": "10.0.0.2", 00:16:56.287 "trsvcid": "4420" 00:16:56.287 }, 00:16:56.287 "peer_address": { 00:16:56.287 "trtype": "TCP", 00:16:56.287 "adrfam": "IPv4", 00:16:56.287 "traddr": "10.0.0.1", 00:16:56.287 "trsvcid": "43216" 00:16:56.287 }, 00:16:56.287 "auth": { 00:16:56.287 "state": "completed", 00:16:56.287 "digest": "sha256", 00:16:56.287 "dhgroup": "ffdhe8192" 00:16:56.287 } 00:16:56.287 } 00:16:56.287 ]' 00:16:56.287 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.287 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.287 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.546 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.546 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.546 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:56.546 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.546 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.803 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:16:56.803 12:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:16:57.369 12:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.369 12:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.369 12:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.369 12:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.369 12:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.369 12:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for 
keyid in "${!keys[@]}" 00:16:57.369 12:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:57.369 12:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:57.369 12:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:57.369 12:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.369 12:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:57.369 12:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:57.369 12:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:57.370 12:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.370 12:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.370 12:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.370 12:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.370 12:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.370 12:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.370 12:25:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.370 12:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.937 00:16:57.937 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.937 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.937 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.196 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.196 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.196 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.196 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.196 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.196 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.196 { 00:16:58.196 "cntlid": 45, 00:16:58.196 "qid": 0, 00:16:58.196 "state": "enabled", 00:16:58.196 "thread": 
"nvmf_tgt_poll_group_000", 00:16:58.196 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:58.196 "listen_address": { 00:16:58.196 "trtype": "TCP", 00:16:58.196 "adrfam": "IPv4", 00:16:58.196 "traddr": "10.0.0.2", 00:16:58.196 "trsvcid": "4420" 00:16:58.196 }, 00:16:58.196 "peer_address": { 00:16:58.196 "trtype": "TCP", 00:16:58.196 "adrfam": "IPv4", 00:16:58.196 "traddr": "10.0.0.1", 00:16:58.196 "trsvcid": "49722" 00:16:58.196 }, 00:16:58.196 "auth": { 00:16:58.196 "state": "completed", 00:16:58.196 "digest": "sha256", 00:16:58.196 "dhgroup": "ffdhe8192" 00:16:58.196 } 00:16:58.196 } 00:16:58.196 ]' 00:16:58.196 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.196 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.196 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.196 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.196 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.455 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.455 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.455 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.455 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret 
DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:16:58.455 12:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:16:59.022 12:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.022 12:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:59.022 12:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.022 12:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.022 12:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.022 12:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.022 12:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:59.022 12:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:59.281 12:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:59.281 
12:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.281 12:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:59.281 12:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:59.281 12:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:59.281 12:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.281 12:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:59.281 12:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.281 12:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.281 12:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.281 12:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:59.281 12:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.281 12:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:59.848 00:16:59.848 12:25:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.848 12:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.848 12:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.107 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.107 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.107 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.107 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.107 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.107 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.107 { 00:17:00.107 "cntlid": 47, 00:17:00.107 "qid": 0, 00:17:00.107 "state": "enabled", 00:17:00.107 "thread": "nvmf_tgt_poll_group_000", 00:17:00.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:00.107 "listen_address": { 00:17:00.107 "trtype": "TCP", 00:17:00.107 "adrfam": "IPv4", 00:17:00.107 "traddr": "10.0.0.2", 00:17:00.107 "trsvcid": "4420" 00:17:00.107 }, 00:17:00.107 "peer_address": { 00:17:00.107 "trtype": "TCP", 00:17:00.107 "adrfam": "IPv4", 00:17:00.107 "traddr": "10.0.0.1", 00:17:00.107 "trsvcid": "49740" 00:17:00.107 }, 00:17:00.107 "auth": { 00:17:00.107 "state": "completed", 00:17:00.107 "digest": "sha256", 00:17:00.107 "dhgroup": "ffdhe8192" 00:17:00.107 } 00:17:00.107 } 00:17:00.107 ]' 00:17:00.107 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:17:00.107 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.107 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.107 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.107 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.107 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.107 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.107 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.366 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:17:00.366 12:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:17:00.933 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.933 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.933 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.933 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.933 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.933 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:00.933 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.933 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.933 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:00.933 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:01.192 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:01.192 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.192 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:01.192 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:01.192 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:01.192 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.192 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.192 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.192 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.192 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.192 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.192 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.192 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.451 00:17:01.451 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.451 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.451 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.710 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.710 12:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.710 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.710 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.710 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.710 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.710 { 00:17:01.710 "cntlid": 49, 00:17:01.710 "qid": 0, 00:17:01.710 "state": "enabled", 00:17:01.710 "thread": "nvmf_tgt_poll_group_000", 00:17:01.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:01.710 "listen_address": { 00:17:01.710 "trtype": "TCP", 00:17:01.710 "adrfam": "IPv4", 00:17:01.710 "traddr": "10.0.0.2", 00:17:01.710 "trsvcid": "4420" 00:17:01.710 }, 00:17:01.710 "peer_address": { 00:17:01.710 "trtype": "TCP", 00:17:01.710 "adrfam": "IPv4", 00:17:01.710 "traddr": "10.0.0.1", 00:17:01.710 "trsvcid": "49770" 00:17:01.710 }, 00:17:01.710 "auth": { 00:17:01.710 "state": "completed", 00:17:01.710 "digest": "sha384", 00:17:01.710 "dhgroup": "null" 00:17:01.710 } 00:17:01.710 } 00:17:01.710 ]' 00:17:01.710 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.710 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.710 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.710 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:01.710 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.710 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.710 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.710 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.969 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:17:01.969 12:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:17:02.536 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.536 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.536 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.536 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.536 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.536 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.536 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:02.536 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:02.795 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:02.795 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.795 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:02.795 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:02.795 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:02.795 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.795 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.795 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.795 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.795 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.795 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.795 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.795 12:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.054 00:17:03.054 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.054 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.054 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.312 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.312 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.312 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.313 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.313 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.313 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.313 { 00:17:03.313 "cntlid": 51, 
00:17:03.313 "qid": 0, 00:17:03.313 "state": "enabled", 00:17:03.313 "thread": "nvmf_tgt_poll_group_000", 00:17:03.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:03.313 "listen_address": { 00:17:03.313 "trtype": "TCP", 00:17:03.313 "adrfam": "IPv4", 00:17:03.313 "traddr": "10.0.0.2", 00:17:03.313 "trsvcid": "4420" 00:17:03.313 }, 00:17:03.313 "peer_address": { 00:17:03.313 "trtype": "TCP", 00:17:03.313 "adrfam": "IPv4", 00:17:03.313 "traddr": "10.0.0.1", 00:17:03.313 "trsvcid": "49794" 00:17:03.313 }, 00:17:03.313 "auth": { 00:17:03.313 "state": "completed", 00:17:03.313 "digest": "sha384", 00:17:03.313 "dhgroup": "null" 00:17:03.313 } 00:17:03.313 } 00:17:03.313 ]' 00:17:03.313 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.313 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:03.313 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.313 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:03.313 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.313 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.313 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.313 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.572 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret 
DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:17:03.572 12:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:17:04.140 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.140 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.140 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.140 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.140 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.140 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:04.140 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:04.140 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:04.398 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 
00:17:04.398 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.399 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:04.399 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:04.399 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:04.399 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.399 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.399 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.399 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.399 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.399 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.399 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.399 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.658 00:17:04.658 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.658 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.658 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.916 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.916 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.916 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.916 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.916 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.916 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.916 { 00:17:04.916 "cntlid": 53, 00:17:04.916 "qid": 0, 00:17:04.916 "state": "enabled", 00:17:04.916 "thread": "nvmf_tgt_poll_group_000", 00:17:04.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:04.916 "listen_address": { 00:17:04.916 "trtype": "TCP", 00:17:04.916 "adrfam": "IPv4", 00:17:04.916 "traddr": "10.0.0.2", 00:17:04.916 "trsvcid": "4420" 00:17:04.916 }, 00:17:04.916 "peer_address": { 00:17:04.916 "trtype": "TCP", 00:17:04.916 "adrfam": "IPv4", 00:17:04.916 "traddr": "10.0.0.1", 00:17:04.916 "trsvcid": "49814" 00:17:04.916 }, 00:17:04.916 "auth": { 00:17:04.916 "state": "completed", 00:17:04.916 "digest": "sha384", 00:17:04.916 "dhgroup": "null" 00:17:04.916 } 00:17:04.916 } 
00:17:04.916 ]' 00:17:04.916 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.916 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.916 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.917 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:04.917 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.917 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.917 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.917 12:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.175 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:17:05.175 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:17:05.743 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.743 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.743 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.743 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.743 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.743 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.743 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.743 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:05.743 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:06.002 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:06.002 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.002 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:06.002 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:06.002 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:06.002 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.002 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:06.002 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.002 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.002 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.002 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:06.002 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.002 12:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.261 00:17:06.261 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.261 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.261 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.261 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.261 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:06.261 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.261 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.520 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.520 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.520 { 00:17:06.520 "cntlid": 55, 00:17:06.520 "qid": 0, 00:17:06.520 "state": "enabled", 00:17:06.520 "thread": "nvmf_tgt_poll_group_000", 00:17:06.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:06.520 "listen_address": { 00:17:06.520 "trtype": "TCP", 00:17:06.520 "adrfam": "IPv4", 00:17:06.520 "traddr": "10.0.0.2", 00:17:06.520 "trsvcid": "4420" 00:17:06.520 }, 00:17:06.520 "peer_address": { 00:17:06.520 "trtype": "TCP", 00:17:06.520 "adrfam": "IPv4", 00:17:06.520 "traddr": "10.0.0.1", 00:17:06.520 "trsvcid": "49838" 00:17:06.520 }, 00:17:06.520 "auth": { 00:17:06.520 "state": "completed", 00:17:06.520 "digest": "sha384", 00:17:06.520 "dhgroup": "null" 00:17:06.520 } 00:17:06.520 } 00:17:06.520 ]' 00:17:06.520 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.520 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.520 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.520 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:06.520 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.520 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.521 12:25:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.521 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.780 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:17:06.780 12:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:17:07.347 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.347 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:07.347 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.347 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.347 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.347 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.347 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.347 
12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:07.347 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:07.606 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:07.606 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.606 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:07.606 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:07.606 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:07.606 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.606 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.606 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.606 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.606 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.606 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.606 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # 
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.606 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.865 00:17:07.865 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.865 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.865 12:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.865 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.865 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.865 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.865 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.865 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.865 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.865 { 00:17:07.865 "cntlid": 57, 00:17:07.865 "qid": 0, 00:17:07.865 "state": "enabled", 00:17:07.865 "thread": "nvmf_tgt_poll_group_000", 00:17:07.865 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:07.865 "listen_address": { 00:17:07.865 "trtype": "TCP", 00:17:07.865 "adrfam": "IPv4", 00:17:07.865 "traddr": "10.0.0.2", 00:17:07.865 "trsvcid": "4420" 00:17:07.865 }, 00:17:07.865 "peer_address": { 00:17:07.865 "trtype": "TCP", 00:17:07.865 "adrfam": "IPv4", 00:17:07.865 "traddr": "10.0.0.1", 00:17:07.865 "trsvcid": "57184" 00:17:07.865 }, 00:17:07.865 "auth": { 00:17:07.865 "state": "completed", 00:17:07.865 "digest": "sha384", 00:17:07.865 "dhgroup": "ffdhe2048" 00:17:07.865 } 00:17:07.865 } 00:17:07.865 ]' 00:17:07.865 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.124 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.124 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.124 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:08.124 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:08.124 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.124 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.124 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.382 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret 
DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:17:08.382 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:17:08.948 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.948 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.948 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.948 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.948 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.948 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.948 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:08.948 12:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.206 12:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:09.206 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.206 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:09.206 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:09.206 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:09.206 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.206 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.206 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.206 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.206 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.206 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.206 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.206 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:09.206 00:17:09.464 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.464 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.464 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.464 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.464 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.464 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.464 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.464 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.464 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.464 { 00:17:09.464 "cntlid": 59, 00:17:09.464 "qid": 0, 00:17:09.464 "state": "enabled", 00:17:09.464 "thread": "nvmf_tgt_poll_group_000", 00:17:09.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:09.464 "listen_address": { 00:17:09.464 "trtype": "TCP", 00:17:09.464 "adrfam": "IPv4", 00:17:09.464 "traddr": "10.0.0.2", 00:17:09.464 "trsvcid": "4420" 00:17:09.464 }, 00:17:09.465 "peer_address": { 00:17:09.465 "trtype": "TCP", 00:17:09.465 "adrfam": "IPv4", 00:17:09.465 "traddr": "10.0.0.1", 00:17:09.465 "trsvcid": "57206" 00:17:09.465 }, 00:17:09.465 "auth": { 00:17:09.465 "state": 
"completed", 00:17:09.465 "digest": "sha384", 00:17:09.465 "dhgroup": "ffdhe2048" 00:17:09.465 } 00:17:09.465 } 00:17:09.465 ]' 00:17:09.465 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.723 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.723 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.723 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:09.723 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.723 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.723 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.723 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.981 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:17:09.981 12:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:17:10.548 12:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.548 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:10.548 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.548 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.548 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.548 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.548 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:10.548 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:10.806 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:10.806 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.806 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:10.806 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:10.806 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:10.806 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.806 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.806 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.806 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.806 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.806 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.806 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.806 12:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.064 00:17:11.064 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.064 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.064 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.065 
12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.065 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.065 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.065 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.323 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.323 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.323 { 00:17:11.323 "cntlid": 61, 00:17:11.323 "qid": 0, 00:17:11.323 "state": "enabled", 00:17:11.323 "thread": "nvmf_tgt_poll_group_000", 00:17:11.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:11.323 "listen_address": { 00:17:11.323 "trtype": "TCP", 00:17:11.323 "adrfam": "IPv4", 00:17:11.323 "traddr": "10.0.0.2", 00:17:11.323 "trsvcid": "4420" 00:17:11.323 }, 00:17:11.323 "peer_address": { 00:17:11.323 "trtype": "TCP", 00:17:11.323 "adrfam": "IPv4", 00:17:11.323 "traddr": "10.0.0.1", 00:17:11.323 "trsvcid": "57240" 00:17:11.323 }, 00:17:11.323 "auth": { 00:17:11.323 "state": "completed", 00:17:11.323 "digest": "sha384", 00:17:11.323 "dhgroup": "ffdhe2048" 00:17:11.323 } 00:17:11.323 } 00:17:11.323 ]' 00:17:11.323 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.323 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.323 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.323 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:11.323 12:25:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.323 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.323 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.323 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.582 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:17:11.582 12:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:17:12.150 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.150 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.150 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.150 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.150 
12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.150 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.150 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:12.150 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:12.408 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:12.408 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.408 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:12.408 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:12.408 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:12.408 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.408 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:12.408 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.408 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.408 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.408 12:25:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:12.408 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.408 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:12.667 00:17:12.667 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.667 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.667 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.667 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.667 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.667 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.667 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.925 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.925 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.925 { 00:17:12.925 "cntlid": 63, 
00:17:12.925 "qid": 0, 00:17:12.925 "state": "enabled", 00:17:12.925 "thread": "nvmf_tgt_poll_group_000", 00:17:12.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:12.925 "listen_address": { 00:17:12.925 "trtype": "TCP", 00:17:12.925 "adrfam": "IPv4", 00:17:12.925 "traddr": "10.0.0.2", 00:17:12.925 "trsvcid": "4420" 00:17:12.925 }, 00:17:12.925 "peer_address": { 00:17:12.925 "trtype": "TCP", 00:17:12.925 "adrfam": "IPv4", 00:17:12.925 "traddr": "10.0.0.1", 00:17:12.925 "trsvcid": "57262" 00:17:12.925 }, 00:17:12.925 "auth": { 00:17:12.925 "state": "completed", 00:17:12.925 "digest": "sha384", 00:17:12.925 "dhgroup": "ffdhe2048" 00:17:12.925 } 00:17:12.925 } 00:17:12.925 ]' 00:17:12.925 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.925 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.925 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.925 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:12.925 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.925 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.925 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.925 12:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.183 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:17:13.183 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:17:13.750 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.750 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:13.750 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.750 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.750 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.750 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.750 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.750 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:13.750 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:14.008 12:25:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:14.008 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.008 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.008 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:14.008 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:14.008 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.008 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.008 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.008 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.008 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.008 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.008 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.008 12:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.267 00:17:14.267 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.267 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.267 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.526 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.526 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.526 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.526 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.526 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.526 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.526 { 00:17:14.526 "cntlid": 65, 00:17:14.526 "qid": 0, 00:17:14.526 "state": "enabled", 00:17:14.526 "thread": "nvmf_tgt_poll_group_000", 00:17:14.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:14.526 "listen_address": { 00:17:14.526 "trtype": "TCP", 00:17:14.526 "adrfam": "IPv4", 00:17:14.526 "traddr": "10.0.0.2", 00:17:14.526 "trsvcid": "4420" 00:17:14.526 }, 00:17:14.526 "peer_address": { 00:17:14.526 "trtype": "TCP", 00:17:14.526 "adrfam": "IPv4", 00:17:14.526 "traddr": "10.0.0.1", 00:17:14.526 "trsvcid": "57296" 00:17:14.526 }, 00:17:14.526 "auth": { 00:17:14.526 "state": 
"completed", 00:17:14.526 "digest": "sha384", 00:17:14.526 "dhgroup": "ffdhe3072" 00:17:14.526 } 00:17:14.526 } 00:17:14.526 ]' 00:17:14.526 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.526 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.526 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.526 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:14.526 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.526 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.526 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.526 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.788 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:17:14.788 12:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret 
DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:17:15.405 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.405 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.405 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.405 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.405 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.405 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.405 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.405 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.405 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:15.405 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.405 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:15.405 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:15.405 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # key=key1 00:17:15.405 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.405 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.405 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.405 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.405 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.405 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.405 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.405 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.698 00:17:15.698 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.698 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.698 12:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.967 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.967 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.967 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.967 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.967 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.967 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.967 { 00:17:15.967 "cntlid": 67, 00:17:15.967 "qid": 0, 00:17:15.967 "state": "enabled", 00:17:15.967 "thread": "nvmf_tgt_poll_group_000", 00:17:15.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:15.967 "listen_address": { 00:17:15.967 "trtype": "TCP", 00:17:15.967 "adrfam": "IPv4", 00:17:15.967 "traddr": "10.0.0.2", 00:17:15.967 "trsvcid": "4420" 00:17:15.967 }, 00:17:15.967 "peer_address": { 00:17:15.967 "trtype": "TCP", 00:17:15.967 "adrfam": "IPv4", 00:17:15.967 "traddr": "10.0.0.1", 00:17:15.967 "trsvcid": "57328" 00:17:15.967 }, 00:17:15.967 "auth": { 00:17:15.967 "state": "completed", 00:17:15.967 "digest": "sha384", 00:17:15.967 "dhgroup": "ffdhe3072" 00:17:15.967 } 00:17:15.967 } 00:17:15.967 ]' 00:17:15.967 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.967 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.967 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.226 12:25:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:16.226 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.226 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.226 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.226 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.226 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:17:16.484 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:17:17.051 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.051 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.051 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:17.051 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.051 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.051 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.051 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.051 12:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.051 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:17.051 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.051 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:17.051 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:17.051 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:17.051 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.051 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.051 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.051 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:17:17.051 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.051 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.051 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.051 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.309 00:17:17.309 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.309 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.309 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.568 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.568 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.568 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.568 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.568 12:25:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.568 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.568 { 00:17:17.568 "cntlid": 69, 00:17:17.568 "qid": 0, 00:17:17.568 "state": "enabled", 00:17:17.568 "thread": "nvmf_tgt_poll_group_000", 00:17:17.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:17.568 "listen_address": { 00:17:17.568 "trtype": "TCP", 00:17:17.568 "adrfam": "IPv4", 00:17:17.568 "traddr": "10.0.0.2", 00:17:17.568 "trsvcid": "4420" 00:17:17.568 }, 00:17:17.568 "peer_address": { 00:17:17.568 "trtype": "TCP", 00:17:17.568 "adrfam": "IPv4", 00:17:17.568 "traddr": "10.0.0.1", 00:17:17.568 "trsvcid": "33568" 00:17:17.568 }, 00:17:17.568 "auth": { 00:17:17.568 "state": "completed", 00:17:17.568 "digest": "sha384", 00:17:17.568 "dhgroup": "ffdhe3072" 00:17:17.568 } 00:17:17.568 } 00:17:17.568 ]' 00:17:17.568 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.568 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.568 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.825 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:17.825 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.825 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.825 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.825 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.081 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:17:18.081 12:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:17:18.647 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.647 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:18.647 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.647 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.647 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.647 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.647 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:18.647 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:18.647 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:18.647 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.647 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.647 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:18.647 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:18.647 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.647 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:18.647 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.647 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.647 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.647 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:18.647 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:18.647 12:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:18.905 00:17:18.905 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.905 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.906 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.164 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.164 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.164 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.164 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.164 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.164 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.164 { 00:17:19.164 "cntlid": 71, 00:17:19.164 "qid": 0, 00:17:19.164 "state": "enabled", 00:17:19.164 "thread": "nvmf_tgt_poll_group_000", 00:17:19.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:19.164 "listen_address": { 00:17:19.164 "trtype": "TCP", 00:17:19.164 "adrfam": "IPv4", 00:17:19.164 "traddr": "10.0.0.2", 00:17:19.164 "trsvcid": "4420" 00:17:19.164 }, 00:17:19.164 "peer_address": { 00:17:19.164 "trtype": "TCP", 00:17:19.164 "adrfam": "IPv4", 00:17:19.164 "traddr": 
"10.0.0.1", 00:17:19.164 "trsvcid": "33592" 00:17:19.164 }, 00:17:19.164 "auth": { 00:17:19.164 "state": "completed", 00:17:19.164 "digest": "sha384", 00:17:19.164 "dhgroup": "ffdhe3072" 00:17:19.164 } 00:17:19.164 } 00:17:19.164 ]' 00:17:19.164 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.164 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.164 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.423 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:19.423 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.423 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.423 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.423 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.681 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:17:19.681 12:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:17:20.249 12:25:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.249 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.249 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.249 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.249 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.249 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.249 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.249 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:20.249 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:20.249 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:20.249 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.249 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.249 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:20.249 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # key=key0 00:17:20.249 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.249 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.249 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.249 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.249 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.249 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.249 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.249 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.530 00:17:20.530 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.530 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.530 
12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.789 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.789 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.789 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.789 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.789 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.789 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.789 { 00:17:20.789 "cntlid": 73, 00:17:20.789 "qid": 0, 00:17:20.789 "state": "enabled", 00:17:20.789 "thread": "nvmf_tgt_poll_group_000", 00:17:20.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:20.789 "listen_address": { 00:17:20.789 "trtype": "TCP", 00:17:20.789 "adrfam": "IPv4", 00:17:20.789 "traddr": "10.0.0.2", 00:17:20.789 "trsvcid": "4420" 00:17:20.789 }, 00:17:20.789 "peer_address": { 00:17:20.789 "trtype": "TCP", 00:17:20.789 "adrfam": "IPv4", 00:17:20.789 "traddr": "10.0.0.1", 00:17:20.789 "trsvcid": "33618" 00:17:20.789 }, 00:17:20.789 "auth": { 00:17:20.789 "state": "completed", 00:17:20.789 "digest": "sha384", 00:17:20.789 "dhgroup": "ffdhe4096" 00:17:20.789 } 00:17:20.789 } 00:17:20.789 ]' 00:17:20.789 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.789 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.789 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.048 12:25:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:21.048 12:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.048 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.048 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.048 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.307 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:17:21.307 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:17:21.874 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.874 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:21.874 12:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.874 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.874 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.874 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.874 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.874 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.874 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:21.874 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.874 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.874 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:21.874 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:21.874 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.874 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.874 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.874 12:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.874 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.874 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.874 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.874 12:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.133 00:17:22.133 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.133 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.133 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.391 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.391 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.391 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.391 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:22.391 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.391 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.392 { 00:17:22.392 "cntlid": 75, 00:17:22.392 "qid": 0, 00:17:22.392 "state": "enabled", 00:17:22.392 "thread": "nvmf_tgt_poll_group_000", 00:17:22.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:22.392 "listen_address": { 00:17:22.392 "trtype": "TCP", 00:17:22.392 "adrfam": "IPv4", 00:17:22.392 "traddr": "10.0.0.2", 00:17:22.392 "trsvcid": "4420" 00:17:22.392 }, 00:17:22.392 "peer_address": { 00:17:22.392 "trtype": "TCP", 00:17:22.392 "adrfam": "IPv4", 00:17:22.392 "traddr": "10.0.0.1", 00:17:22.392 "trsvcid": "33642" 00:17:22.392 }, 00:17:22.392 "auth": { 00:17:22.392 "state": "completed", 00:17:22.392 "digest": "sha384", 00:17:22.392 "dhgroup": "ffdhe4096" 00:17:22.392 } 00:17:22.392 } 00:17:22.392 ]' 00:17:22.392 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.392 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.392 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.650 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:22.650 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.650 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.650 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.650 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.908 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:17:22.908 12:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:17:23.475 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.475 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:23.475 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.475 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.475 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.475 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.475 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:23.475 12:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:23.475 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:23.475 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.475 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.475 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:23.475 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:23.475 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.475 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.475 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.475 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.475 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.475 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.475 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.475 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.736 00:17:23.736 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.736 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.736 12:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.994 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.994 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.994 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.994 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.995 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.995 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.995 { 00:17:23.995 "cntlid": 77, 00:17:23.995 "qid": 0, 00:17:23.995 "state": "enabled", 00:17:23.995 "thread": "nvmf_tgt_poll_group_000", 00:17:23.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:23.995 "listen_address": { 00:17:23.995 "trtype": "TCP", 00:17:23.995 "adrfam": "IPv4", 00:17:23.995 "traddr": "10.0.0.2", 
00:17:23.995 "trsvcid": "4420" 00:17:23.995 }, 00:17:23.995 "peer_address": { 00:17:23.995 "trtype": "TCP", 00:17:23.995 "adrfam": "IPv4", 00:17:23.995 "traddr": "10.0.0.1", 00:17:23.995 "trsvcid": "33668" 00:17:23.995 }, 00:17:23.995 "auth": { 00:17:23.995 "state": "completed", 00:17:23.995 "digest": "sha384", 00:17:23.995 "dhgroup": "ffdhe4096" 00:17:23.995 } 00:17:23.995 } 00:17:23.995 ]' 00:17:23.995 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.995 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.995 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.254 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:24.254 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.254 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.254 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.254 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.512 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:17:24.512 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:17:25.080 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.080 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.080 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.080 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.080 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.080 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.080 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.080 12:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.080 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:25.080 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.080 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:25.080 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:25.080 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:25.080 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.080 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:25.080 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.080 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.080 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.080 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:25.080 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:25.080 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:25.339 00:17:25.339 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.339 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.339 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.598 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.598 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.598 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.598 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.598 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.598 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.598 { 00:17:25.598 "cntlid": 79, 00:17:25.598 "qid": 0, 00:17:25.598 "state": "enabled", 00:17:25.598 "thread": "nvmf_tgt_poll_group_000", 00:17:25.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:25.598 "listen_address": { 00:17:25.598 "trtype": "TCP", 00:17:25.598 "adrfam": "IPv4", 00:17:25.598 "traddr": "10.0.0.2", 00:17:25.598 "trsvcid": "4420" 00:17:25.598 }, 00:17:25.598 "peer_address": { 00:17:25.598 "trtype": "TCP", 00:17:25.598 "adrfam": "IPv4", 00:17:25.598 "traddr": "10.0.0.1", 00:17:25.598 "trsvcid": "33702" 00:17:25.598 }, 00:17:25.598 "auth": { 00:17:25.598 "state": "completed", 00:17:25.598 "digest": "sha384", 00:17:25.598 "dhgroup": "ffdhe4096" 00:17:25.598 } 00:17:25.598 } 00:17:25.598 ]' 00:17:25.598 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.598 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.598 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.857 12:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.857 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.857 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.857 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.857 12:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.115 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:17:26.115 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:17:26.682 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.682 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.682 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.682 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:26.682 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.682 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.682 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.682 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:26.682 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:26.682 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:26.682 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.682 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.682 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:26.682 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:26.682 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.682 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.682 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.682 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:17:26.682 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.682 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.683 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.683 12:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.250 00:17:27.250 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.250 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.250 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.250 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.250 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.250 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.250 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.250 12:25:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.250 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.250 { 00:17:27.250 "cntlid": 81, 00:17:27.250 "qid": 0, 00:17:27.250 "state": "enabled", 00:17:27.250 "thread": "nvmf_tgt_poll_group_000", 00:17:27.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:27.250 "listen_address": { 00:17:27.250 "trtype": "TCP", 00:17:27.250 "adrfam": "IPv4", 00:17:27.250 "traddr": "10.0.0.2", 00:17:27.250 "trsvcid": "4420" 00:17:27.250 }, 00:17:27.250 "peer_address": { 00:17:27.250 "trtype": "TCP", 00:17:27.250 "adrfam": "IPv4", 00:17:27.250 "traddr": "10.0.0.1", 00:17:27.250 "trsvcid": "57896" 00:17:27.250 }, 00:17:27.250 "auth": { 00:17:27.250 "state": "completed", 00:17:27.250 "digest": "sha384", 00:17:27.250 "dhgroup": "ffdhe6144" 00:17:27.250 } 00:17:27.250 } 00:17:27.250 ]' 00:17:27.250 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.509 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.509 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.509 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:27.509 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.509 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.509 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.509 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.767 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:17:27.767 12:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:17:28.337 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.337 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.337 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.337 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.337 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.337 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.337 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:28.337 12:25:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:28.595 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:28.595 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.595 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:28.595 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:28.595 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:28.595 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.595 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.595 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.595 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.595 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.595 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.595 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.595 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.854 00:17:28.854 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.854 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.854 12:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.112 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.112 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.112 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.112 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.112 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.112 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.112 { 00:17:29.112 "cntlid": 83, 00:17:29.112 "qid": 0, 00:17:29.112 "state": "enabled", 00:17:29.112 "thread": "nvmf_tgt_poll_group_000", 00:17:29.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:29.112 "listen_address": { 00:17:29.112 "trtype": "TCP", 00:17:29.112 "adrfam": "IPv4", 00:17:29.112 "traddr": "10.0.0.2", 
00:17:29.112 "trsvcid": "4420" 00:17:29.112 }, 00:17:29.112 "peer_address": { 00:17:29.112 "trtype": "TCP", 00:17:29.112 "adrfam": "IPv4", 00:17:29.112 "traddr": "10.0.0.1", 00:17:29.112 "trsvcid": "57922" 00:17:29.112 }, 00:17:29.112 "auth": { 00:17:29.112 "state": "completed", 00:17:29.112 "digest": "sha384", 00:17:29.112 "dhgroup": "ffdhe6144" 00:17:29.112 } 00:17:29.112 } 00:17:29.112 ]' 00:17:29.112 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.112 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.112 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.112 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:29.112 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.112 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.112 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.112 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.371 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:17:29.371 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:17:29.938 12:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.938 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.938 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.938 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.938 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.938 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.938 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:29.938 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:30.197 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:30.197 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.197 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:30.197 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:30.197 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:30.197 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.197 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.197 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.197 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.197 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.197 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.197 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.197 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.456 00:17:30.456 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.456 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq 
-r '.[].name' 00:17:30.456 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.714 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.714 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.714 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.714 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.714 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.714 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.714 { 00:17:30.714 "cntlid": 85, 00:17:30.714 "qid": 0, 00:17:30.714 "state": "enabled", 00:17:30.714 "thread": "nvmf_tgt_poll_group_000", 00:17:30.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:30.715 "listen_address": { 00:17:30.715 "trtype": "TCP", 00:17:30.715 "adrfam": "IPv4", 00:17:30.715 "traddr": "10.0.0.2", 00:17:30.715 "trsvcid": "4420" 00:17:30.715 }, 00:17:30.715 "peer_address": { 00:17:30.715 "trtype": "TCP", 00:17:30.715 "adrfam": "IPv4", 00:17:30.715 "traddr": "10.0.0.1", 00:17:30.715 "trsvcid": "57946" 00:17:30.715 }, 00:17:30.715 "auth": { 00:17:30.715 "state": "completed", 00:17:30.715 "digest": "sha384", 00:17:30.715 "dhgroup": "ffdhe6144" 00:17:30.715 } 00:17:30.715 } 00:17:30.715 ]' 00:17:30.715 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.715 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.715 12:25:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.715 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:30.715 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.973 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.973 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.973 12:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.973 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:17:30.973 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:17:31.540 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.540 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:31.540 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.540 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.799 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.799 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.799 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:31.799 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:31.799 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:31.799 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.799 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:31.799 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:31.799 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:31.799 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.799 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:31.799 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.799 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.799 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.799 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:31.799 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:31.799 12:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:32.366 00:17:32.366 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.366 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.366 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.366 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.366 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.366 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.366 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:32.366 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.366 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.366 { 00:17:32.366 "cntlid": 87, 00:17:32.366 "qid": 0, 00:17:32.366 "state": "enabled", 00:17:32.366 "thread": "nvmf_tgt_poll_group_000", 00:17:32.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:32.366 "listen_address": { 00:17:32.366 "trtype": "TCP", 00:17:32.366 "adrfam": "IPv4", 00:17:32.366 "traddr": "10.0.0.2", 00:17:32.366 "trsvcid": "4420" 00:17:32.366 }, 00:17:32.366 "peer_address": { 00:17:32.366 "trtype": "TCP", 00:17:32.366 "adrfam": "IPv4", 00:17:32.366 "traddr": "10.0.0.1", 00:17:32.366 "trsvcid": "57976" 00:17:32.366 }, 00:17:32.366 "auth": { 00:17:32.366 "state": "completed", 00:17:32.366 "digest": "sha384", 00:17:32.366 "dhgroup": "ffdhe6144" 00:17:32.366 } 00:17:32.366 } 00:17:32.366 ]' 00:17:32.366 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.366 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.366 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.625 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:32.625 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.625 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.625 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.625 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.625 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:17:32.625 12:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:17:33.191 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.450 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:33.450 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.450 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.450 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.450 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:33.450 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.450 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:33.450 12:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:33.450 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:33.450 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.450 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:33.450 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:33.450 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:33.450 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.450 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.450 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.450 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.450 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.450 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.450 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.450 12:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.017 00:17:34.017 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.017 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.017 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.276 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.276 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.276 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.276 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.276 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.276 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.276 { 00:17:34.276 "cntlid": 89, 00:17:34.276 "qid": 0, 00:17:34.276 "state": "enabled", 00:17:34.276 "thread": "nvmf_tgt_poll_group_000", 00:17:34.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:34.276 "listen_address": { 00:17:34.276 "trtype": "TCP", 00:17:34.276 "adrfam": "IPv4", 00:17:34.276 "traddr": "10.0.0.2", 
00:17:34.276 "trsvcid": "4420" 00:17:34.276 }, 00:17:34.276 "peer_address": { 00:17:34.276 "trtype": "TCP", 00:17:34.276 "adrfam": "IPv4", 00:17:34.276 "traddr": "10.0.0.1", 00:17:34.276 "trsvcid": "58008" 00:17:34.276 }, 00:17:34.276 "auth": { 00:17:34.276 "state": "completed", 00:17:34.276 "digest": "sha384", 00:17:34.276 "dhgroup": "ffdhe8192" 00:17:34.276 } 00:17:34.276 } 00:17:34.276 ]' 00:17:34.276 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.276 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.276 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.276 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:34.276 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.276 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.276 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.276 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.534 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:17:34.534 12:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:17:35.101 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.101 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:35.101 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.101 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.101 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.101 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.101 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:35.101 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:35.360 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:35.360 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.360 12:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.360 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:35.360 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:35.360 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.360 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.360 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.360 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.360 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.360 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.360 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.360 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.928 00:17:35.928 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.928 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.928 12:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.928 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.928 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.928 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.928 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.187 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.187 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.187 { 00:17:36.187 "cntlid": 91, 00:17:36.187 "qid": 0, 00:17:36.187 "state": "enabled", 00:17:36.187 "thread": "nvmf_tgt_poll_group_000", 00:17:36.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:36.187 "listen_address": { 00:17:36.187 "trtype": "TCP", 00:17:36.187 "adrfam": "IPv4", 00:17:36.187 "traddr": "10.0.0.2", 00:17:36.187 "trsvcid": "4420" 00:17:36.187 }, 00:17:36.187 "peer_address": { 00:17:36.187 "trtype": "TCP", 00:17:36.187 "adrfam": "IPv4", 00:17:36.187 "traddr": "10.0.0.1", 00:17:36.187 "trsvcid": "58040" 00:17:36.187 }, 00:17:36.187 "auth": { 00:17:36.187 "state": "completed", 00:17:36.187 "digest": "sha384", 00:17:36.187 "dhgroup": "ffdhe8192" 00:17:36.187 } 00:17:36.187 } 00:17:36.187 ]' 00:17:36.187 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.187 12:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.187 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.187 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:36.187 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.187 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.187 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.187 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.446 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:17:36.446 12:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:17:37.013 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.013 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.013 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.013 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.013 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.013 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.013 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:37.013 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:37.272 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:37.272 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.272 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:37.272 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:37.272 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:37.272 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.272 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:37.272 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.272 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.272 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.272 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.272 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.272 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.839 00:17:37.839 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.839 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.839 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.839 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.839 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.839 12:25:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.839 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.839 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.839 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.839 { 00:17:37.839 "cntlid": 93, 00:17:37.839 "qid": 0, 00:17:37.839 "state": "enabled", 00:17:37.839 "thread": "nvmf_tgt_poll_group_000", 00:17:37.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:37.839 "listen_address": { 00:17:37.839 "trtype": "TCP", 00:17:37.839 "adrfam": "IPv4", 00:17:37.839 "traddr": "10.0.0.2", 00:17:37.839 "trsvcid": "4420" 00:17:37.839 }, 00:17:37.839 "peer_address": { 00:17:37.839 "trtype": "TCP", 00:17:37.839 "adrfam": "IPv4", 00:17:37.839 "traddr": "10.0.0.1", 00:17:37.839 "trsvcid": "45708" 00:17:37.839 }, 00:17:37.839 "auth": { 00:17:37.839 "state": "completed", 00:17:37.839 "digest": "sha384", 00:17:37.839 "dhgroup": "ffdhe8192" 00:17:37.839 } 00:17:37.839 } 00:17:37.839 ]' 00:17:37.839 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.839 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.839 12:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:38.098 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:38.098 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:38.098 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.098 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.098 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.357 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:17:38.357 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:17:38.924 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.924 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.924 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.924 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.924 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.924 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.924 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.924 12:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:39.182 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:39.182 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:39.182 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:39.182 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:39.182 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:39.182 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.182 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:39.182 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.182 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.182 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.182 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:39.182 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:39.182 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:39.440 00:17:39.698 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.698 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.698 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.698 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.698 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.699 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.699 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.699 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.699 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.699 { 00:17:39.699 "cntlid": 95, 00:17:39.699 "qid": 0, 00:17:39.699 "state": "enabled", 00:17:39.699 "thread": "nvmf_tgt_poll_group_000", 00:17:39.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:39.699 "listen_address": { 00:17:39.699 "trtype": "TCP", 00:17:39.699 
"adrfam": "IPv4", 00:17:39.699 "traddr": "10.0.0.2", 00:17:39.699 "trsvcid": "4420" 00:17:39.699 }, 00:17:39.699 "peer_address": { 00:17:39.699 "trtype": "TCP", 00:17:39.699 "adrfam": "IPv4", 00:17:39.699 "traddr": "10.0.0.1", 00:17:39.699 "trsvcid": "45726" 00:17:39.699 }, 00:17:39.699 "auth": { 00:17:39.699 "state": "completed", 00:17:39.699 "digest": "sha384", 00:17:39.699 "dhgroup": "ffdhe8192" 00:17:39.699 } 00:17:39.699 } 00:17:39.699 ]' 00:17:39.699 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.957 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.957 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.957 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:39.957 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.957 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.957 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.957 12:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.216 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:17:40.216 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:17:40.783 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.783 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.783 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.783 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.783 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.783 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:40.783 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:40.783 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.783 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:40.783 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:41.042 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:41.042 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.042 
12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:41.042 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:41.042 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:41.042 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.042 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.042 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.042 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.042 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.042 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.042 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.042 12:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.042 00:17:41.300 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.300 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.300 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.300 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.300 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.300 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.300 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.300 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.300 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.300 { 00:17:41.300 "cntlid": 97, 00:17:41.300 "qid": 0, 00:17:41.300 "state": "enabled", 00:17:41.300 "thread": "nvmf_tgt_poll_group_000", 00:17:41.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:41.300 "listen_address": { 00:17:41.300 "trtype": "TCP", 00:17:41.300 "adrfam": "IPv4", 00:17:41.300 "traddr": "10.0.0.2", 00:17:41.300 "trsvcid": "4420" 00:17:41.300 }, 00:17:41.300 "peer_address": { 00:17:41.300 "trtype": "TCP", 00:17:41.300 "adrfam": "IPv4", 00:17:41.300 "traddr": "10.0.0.1", 00:17:41.300 "trsvcid": "45744" 00:17:41.300 }, 00:17:41.300 "auth": { 00:17:41.300 "state": "completed", 00:17:41.300 "digest": "sha512", 00:17:41.300 "dhgroup": "null" 00:17:41.300 } 00:17:41.300 } 00:17:41.300 ]' 00:17:41.300 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.558 12:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.558 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.559 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:41.559 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.559 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.559 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.559 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.817 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:17:41.817 12:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:17:42.384 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.384 12:26:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.384 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.384 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.384 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.384 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.384 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:42.384 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:42.384 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:42.384 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.384 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:42.384 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:42.384 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:42.384 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.384 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.384 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.384 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.642 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.642 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.642 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.642 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.642 00:17:42.642 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.642 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.642 12:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.901 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.901 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.901 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.901 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.901 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.901 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.901 { 00:17:42.901 "cntlid": 99, 00:17:42.901 "qid": 0, 00:17:42.901 "state": "enabled", 00:17:42.901 "thread": "nvmf_tgt_poll_group_000", 00:17:42.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:42.901 "listen_address": { 00:17:42.901 "trtype": "TCP", 00:17:42.901 "adrfam": "IPv4", 00:17:42.901 "traddr": "10.0.0.2", 00:17:42.901 "trsvcid": "4420" 00:17:42.901 }, 00:17:42.901 "peer_address": { 00:17:42.901 "trtype": "TCP", 00:17:42.901 "adrfam": "IPv4", 00:17:42.901 "traddr": "10.0.0.1", 00:17:42.901 "trsvcid": "45776" 00:17:42.901 }, 00:17:42.901 "auth": { 00:17:42.901 "state": "completed", 00:17:42.901 "digest": "sha512", 00:17:42.901 "dhgroup": "null" 00:17:42.901 } 00:17:42.901 } 00:17:42.901 ]' 00:17:42.901 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.901 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.901 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.159 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:43.159 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.159 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
00:17:43.159 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.159 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.417 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:17:43.417 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:17:43.984 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.984 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.984 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.984 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.984 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.984 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:43.984 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.984 12:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.984 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:43.984 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.984 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.984 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:43.984 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:43.984 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.984 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.984 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.984 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.984 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.984 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.984 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.984 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:44.243 00:17:44.243 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.243 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.243 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.501 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.501 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.501 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.501 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.501 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.501 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.501 { 00:17:44.501 "cntlid": 101, 00:17:44.501 "qid": 0, 00:17:44.501 "state": "enabled", 00:17:44.501 "thread": "nvmf_tgt_poll_group_000", 00:17:44.501 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:44.501 "listen_address": { 00:17:44.501 "trtype": "TCP", 00:17:44.501 "adrfam": "IPv4", 00:17:44.501 "traddr": "10.0.0.2", 00:17:44.501 "trsvcid": "4420" 00:17:44.501 }, 00:17:44.501 "peer_address": { 00:17:44.501 "trtype": "TCP", 00:17:44.501 "adrfam": "IPv4", 00:17:44.501 "traddr": "10.0.0.1", 00:17:44.501 "trsvcid": "45808" 00:17:44.501 }, 00:17:44.501 "auth": { 00:17:44.501 "state": "completed", 00:17:44.501 "digest": "sha512", 00:17:44.501 "dhgroup": "null" 00:17:44.501 } 00:17:44.501 } 00:17:44.501 ]' 00:17:44.501 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.501 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.501 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.760 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:44.760 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.760 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.760 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.760 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.018 12:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:17:45.018 12:26:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:17:45.585 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.585 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:45.585 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.585 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.586 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.586 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.586 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:45.586 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:45.586 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:45.586 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:17:45.586 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.586 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:45.586 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:45.586 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.586 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:45.586 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.586 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.586 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.586 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:45.586 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.586 12:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.844 00:17:45.844 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:17:45.844 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.844 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.102 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.102 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.102 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.102 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.102 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.102 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.102 { 00:17:46.102 "cntlid": 103, 00:17:46.102 "qid": 0, 00:17:46.102 "state": "enabled", 00:17:46.102 "thread": "nvmf_tgt_poll_group_000", 00:17:46.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:46.102 "listen_address": { 00:17:46.102 "trtype": "TCP", 00:17:46.102 "adrfam": "IPv4", 00:17:46.102 "traddr": "10.0.0.2", 00:17:46.102 "trsvcid": "4420" 00:17:46.102 }, 00:17:46.102 "peer_address": { 00:17:46.102 "trtype": "TCP", 00:17:46.102 "adrfam": "IPv4", 00:17:46.102 "traddr": "10.0.0.1", 00:17:46.102 "trsvcid": "45838" 00:17:46.102 }, 00:17:46.102 "auth": { 00:17:46.102 "state": "completed", 00:17:46.102 "digest": "sha512", 00:17:46.102 "dhgroup": "null" 00:17:46.102 } 00:17:46.102 } 00:17:46.102 ]' 00:17:46.102 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.103 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.103 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.361 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:46.361 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.361 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.361 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.361 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.619 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:17:46.619 12:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:17:47.187 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.187 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:47.187 12:26:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.187 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.187 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.187 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.187 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.187 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.187 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.187 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:47.187 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.187 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:47.187 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:47.187 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:47.187 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.187 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.187 
12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.187 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.187 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.187 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.187 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.187 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.445 00:17:47.445 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.445 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.445 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.704 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.704 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.704 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.704 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.704 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.704 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.704 { 00:17:47.704 "cntlid": 105, 00:17:47.704 "qid": 0, 00:17:47.704 "state": "enabled", 00:17:47.704 "thread": "nvmf_tgt_poll_group_000", 00:17:47.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:47.704 "listen_address": { 00:17:47.704 "trtype": "TCP", 00:17:47.704 "adrfam": "IPv4", 00:17:47.704 "traddr": "10.0.0.2", 00:17:47.704 "trsvcid": "4420" 00:17:47.704 }, 00:17:47.704 "peer_address": { 00:17:47.704 "trtype": "TCP", 00:17:47.704 "adrfam": "IPv4", 00:17:47.704 "traddr": "10.0.0.1", 00:17:47.704 "trsvcid": "55100" 00:17:47.704 }, 00:17:47.704 "auth": { 00:17:47.704 "state": "completed", 00:17:47.704 "digest": "sha512", 00:17:47.704 "dhgroup": "ffdhe2048" 00:17:47.704 } 00:17:47.704 } 00:17:47.704 ]' 00:17:47.704 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.704 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.704 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.964 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:47.964 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.964 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.964 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:47.964 12:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.222 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:17:48.222 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:17:48.790 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.790 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:48.790 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.790 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.790 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.790 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.790 
12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:48.790 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:48.790 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:48.790 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.790 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:48.790 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:48.790 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:48.790 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.790 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.790 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.790 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.049 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.049 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.049 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # 
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.050 12:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.050 00:17:49.308 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.309 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.309 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.309 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.309 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.309 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.309 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.309 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.309 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.309 { 00:17:49.309 "cntlid": 107, 00:17:49.309 "qid": 0, 00:17:49.309 "state": "enabled", 00:17:49.309 "thread": "nvmf_tgt_poll_group_000", 00:17:49.309 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:49.309 "listen_address": { 00:17:49.309 "trtype": "TCP", 00:17:49.309 "adrfam": "IPv4", 00:17:49.309 "traddr": "10.0.0.2", 00:17:49.309 "trsvcid": "4420" 00:17:49.309 }, 00:17:49.309 "peer_address": { 00:17:49.309 "trtype": "TCP", 00:17:49.309 "adrfam": "IPv4", 00:17:49.309 "traddr": "10.0.0.1", 00:17:49.309 "trsvcid": "55124" 00:17:49.309 }, 00:17:49.309 "auth": { 00:17:49.309 "state": "completed", 00:17:49.309 "digest": "sha512", 00:17:49.309 "dhgroup": "ffdhe2048" 00:17:49.309 } 00:17:49.309 } 00:17:49.309 ]' 00:17:49.309 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.567 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.567 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.567 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:49.567 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.567 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.567 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.567 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.826 12:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:17:49.826 12:26:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:17:50.393 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.393 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:50.393 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.393 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.393 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.393 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.393 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:50.393 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:50.393 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:50.393 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:17:50.393 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:50.393 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:50.393 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:50.393 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.393 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.393 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.393 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.652 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.652 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.652 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.652 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.652 
00:17:50.652 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.652 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.652 12:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.911 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.911 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.911 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.911 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.911 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.911 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.911 { 00:17:50.911 "cntlid": 109, 00:17:50.911 "qid": 0, 00:17:50.911 "state": "enabled", 00:17:50.911 "thread": "nvmf_tgt_poll_group_000", 00:17:50.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:50.911 "listen_address": { 00:17:50.911 "trtype": "TCP", 00:17:50.911 "adrfam": "IPv4", 00:17:50.911 "traddr": "10.0.0.2", 00:17:50.911 "trsvcid": "4420" 00:17:50.911 }, 00:17:50.911 "peer_address": { 00:17:50.911 "trtype": "TCP", 00:17:50.911 "adrfam": "IPv4", 00:17:50.911 "traddr": "10.0.0.1", 00:17:50.911 "trsvcid": "55166" 00:17:50.911 }, 00:17:50.911 "auth": { 00:17:50.911 "state": "completed", 00:17:50.911 "digest": "sha512", 00:17:50.911 "dhgroup": "ffdhe2048" 00:17:50.911 } 00:17:50.911 } 00:17:50.911 ]' 00:17:50.911 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.911 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.911 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.170 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:51.170 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.170 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.170 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.170 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.428 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:17:51.428 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:17:52.061 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.061 12:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:52.061 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.061 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.061 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.061 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.061 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:52.061 12:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:52.061 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:52.061 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.061 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:52.061 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:52.061 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:52.061 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.061 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:52.061 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.061 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.061 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.061 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:52.061 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:52.061 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:52.343 00:17:52.343 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.343 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.343 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.601 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.601 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.601 12:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.601 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.601 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.601 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.601 { 00:17:52.602 "cntlid": 111, 00:17:52.602 "qid": 0, 00:17:52.602 "state": "enabled", 00:17:52.602 "thread": "nvmf_tgt_poll_group_000", 00:17:52.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:52.602 "listen_address": { 00:17:52.602 "trtype": "TCP", 00:17:52.602 "adrfam": "IPv4", 00:17:52.602 "traddr": "10.0.0.2", 00:17:52.602 "trsvcid": "4420" 00:17:52.602 }, 00:17:52.602 "peer_address": { 00:17:52.602 "trtype": "TCP", 00:17:52.602 "adrfam": "IPv4", 00:17:52.602 "traddr": "10.0.0.1", 00:17:52.602 "trsvcid": "55180" 00:17:52.602 }, 00:17:52.602 "auth": { 00:17:52.602 "state": "completed", 00:17:52.602 "digest": "sha512", 00:17:52.602 "dhgroup": "ffdhe2048" 00:17:52.602 } 00:17:52.602 } 00:17:52.602 ]' 00:17:52.602 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.602 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.602 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.602 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:52.602 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.602 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.602 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.602 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.860 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:17:52.860 12:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:17:53.427 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.427 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:53.427 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.427 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.427 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.427 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:53.427 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.427 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:53.427 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:53.686 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:53.686 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.686 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:53.686 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:53.686 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:53.686 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.686 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.686 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.686 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.686 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.686 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.686 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.686 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.945 00:17:53.945 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.945 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.945 12:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.203 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.203 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.203 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.203 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.203 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.203 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.203 { 00:17:54.203 "cntlid": 113, 00:17:54.203 "qid": 0, 00:17:54.203 "state": "enabled", 00:17:54.203 "thread": "nvmf_tgt_poll_group_000", 00:17:54.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 
00:17:54.203 "listen_address": { 00:17:54.203 "trtype": "TCP", 00:17:54.203 "adrfam": "IPv4", 00:17:54.203 "traddr": "10.0.0.2", 00:17:54.203 "trsvcid": "4420" 00:17:54.203 }, 00:17:54.203 "peer_address": { 00:17:54.203 "trtype": "TCP", 00:17:54.203 "adrfam": "IPv4", 00:17:54.203 "traddr": "10.0.0.1", 00:17:54.203 "trsvcid": "55200" 00:17:54.203 }, 00:17:54.203 "auth": { 00:17:54.203 "state": "completed", 00:17:54.203 "digest": "sha512", 00:17:54.203 "dhgroup": "ffdhe3072" 00:17:54.203 } 00:17:54.203 } 00:17:54.203 ]' 00:17:54.203 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.203 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.203 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.203 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.203 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.203 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.203 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.203 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.462 12:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:17:54.462 12:26:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:17:55.029 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.029 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:55.029 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.029 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.029 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.029 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.029 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:55.029 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:55.288 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:55.288 12:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.288 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.288 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:55.288 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:55.288 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.288 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.288 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.288 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.288 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.288 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.288 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.288 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.547 00:17:55.547 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.547 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.547 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.805 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.805 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.805 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.805 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.805 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.805 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.805 { 00:17:55.805 "cntlid": 115, 00:17:55.805 "qid": 0, 00:17:55.805 "state": "enabled", 00:17:55.805 "thread": "nvmf_tgt_poll_group_000", 00:17:55.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:55.805 "listen_address": { 00:17:55.805 "trtype": "TCP", 00:17:55.805 "adrfam": "IPv4", 00:17:55.805 "traddr": "10.0.0.2", 00:17:55.805 "trsvcid": "4420" 00:17:55.805 }, 00:17:55.805 "peer_address": { 00:17:55.805 "trtype": "TCP", 00:17:55.805 "adrfam": "IPv4", 00:17:55.805 "traddr": "10.0.0.1", 00:17:55.805 "trsvcid": "55222" 00:17:55.805 }, 00:17:55.805 "auth": { 00:17:55.805 "state": "completed", 00:17:55.805 "digest": "sha512", 00:17:55.805 "dhgroup": "ffdhe3072" 00:17:55.805 } 00:17:55.805 } 00:17:55.805 ]' 
00:17:55.805 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.805 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.805 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.805 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:55.805 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.805 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.805 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.805 12:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.064 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:17:56.064 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:17:56.631 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.631 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.631 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:56.631 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.631 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.631 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.631 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.631 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.631 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.889 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:17:56.889 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.889 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:56.889 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:56.889 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:56.889 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.889 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.889 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.889 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.889 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.889 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.889 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.889 12:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.147 00:17:57.147 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.147 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.147 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.405 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.405 12:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.405 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.405 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.405 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.405 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.405 { 00:17:57.405 "cntlid": 117, 00:17:57.405 "qid": 0, 00:17:57.405 "state": "enabled", 00:17:57.405 "thread": "nvmf_tgt_poll_group_000", 00:17:57.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:57.405 "listen_address": { 00:17:57.405 "trtype": "TCP", 00:17:57.405 "adrfam": "IPv4", 00:17:57.405 "traddr": "10.0.0.2", 00:17:57.405 "trsvcid": "4420" 00:17:57.405 }, 00:17:57.405 "peer_address": { 00:17:57.405 "trtype": "TCP", 00:17:57.405 "adrfam": "IPv4", 00:17:57.405 "traddr": "10.0.0.1", 00:17:57.405 "trsvcid": "34176" 00:17:57.405 }, 00:17:57.405 "auth": { 00:17:57.405 "state": "completed", 00:17:57.405 "digest": "sha512", 00:17:57.405 "dhgroup": "ffdhe3072" 00:17:57.405 } 00:17:57.405 } 00:17:57.405 ]' 00:17:57.405 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.405 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.405 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.405 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:57.405 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.405 12:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:57.405 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:57.405 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:57.663 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i:
00:17:57.663 12:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i:
00:17:58.230 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:58.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:58.230 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:58.230 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:58.230 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:58.230 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:58.230 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:58.230 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:58.230 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:58.488 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3
00:17:58.488 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:58.488 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:58.488 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:58.488 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:58.488 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:58.488 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:17:58.488 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:58.488 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:58.488 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:58.488 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:58.488 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:58.488 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:58.746
00:17:58.746 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:58.746 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:58.746 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:59.005 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:59.005 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:59.005 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:59.005 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:59.005 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:59.005 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:59.005 {
00:17:59.005 "cntlid": 119,
00:17:59.005 "qid": 0,
00:17:59.005 "state": "enabled",
00:17:59.005 "thread": "nvmf_tgt_poll_group_000",
00:17:59.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:17:59.005 "listen_address": {
00:17:59.005 "trtype": "TCP",
00:17:59.005 "adrfam": "IPv4",
00:17:59.005 "traddr": "10.0.0.2",
00:17:59.005 "trsvcid": "4420"
00:17:59.005 },
00:17:59.005 "peer_address": {
00:17:59.005 "trtype": "TCP",
00:17:59.005 "adrfam": "IPv4",
00:17:59.005 "traddr": "10.0.0.1",
00:17:59.005 "trsvcid": "34216"
00:17:59.005 },
00:17:59.005 "auth": {
00:17:59.005 "state": "completed",
00:17:59.005 "digest": "sha512",
00:17:59.005 "dhgroup": "ffdhe3072"
00:17:59.005 }
00:17:59.005 }
00:17:59.005 ]'
00:17:59.005 12:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:59.005 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:59.005 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:59.005 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:59.005 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:59.005 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:59.005 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:59.005 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:59.263 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=:
00:17:59.263 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=:
00:17:59.829 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:59.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:59.829 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:59.829 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:59.829 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:59.829 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:59.829 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:59.829 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:59.829 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:59.829 12:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:00.088 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0
00:18:00.088 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:00.088 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:00.088 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:18:00.088 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:00.088 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:00.088 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:00.088 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:00.088 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:00.088 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:00.088 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:00.088 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:00.088 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:00.347
00:18:00.347 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:00.347 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:00.347 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:00.605 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:00.605 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:00.605 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:00.605 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:00.605 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:00.605 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:00.605 {
00:18:00.605 "cntlid": 121,
00:18:00.605 "qid": 0,
00:18:00.605 "state": "enabled",
00:18:00.605 "thread": "nvmf_tgt_poll_group_000",
00:18:00.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:18:00.605 "listen_address": {
00:18:00.605 "trtype": "TCP",
00:18:00.605 "adrfam": "IPv4",
00:18:00.605 "traddr": "10.0.0.2",
00:18:00.605 "trsvcid": "4420"
00:18:00.605 },
00:18:00.605 "peer_address": {
00:18:00.605 "trtype": "TCP",
00:18:00.605 "adrfam": "IPv4",
00:18:00.605 "traddr": "10.0.0.1",
00:18:00.605 "trsvcid": "34242"
00:18:00.605 },
00:18:00.605 "auth": {
00:18:00.605 "state": "completed",
00:18:00.605 "digest": "sha512",
00:18:00.605 "dhgroup": "ffdhe4096"
00:18:00.605 }
00:18:00.605 }
00:18:00.605 ]'
00:18:00.605 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:00.605 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:00.605 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:00.605 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:00.605 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:00.605 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:00.605 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:00.605 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:00.863 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=:
00:18:00.863 12:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=:
00:18:01.430 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:01.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:01.430 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:18:01.430 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:01.430 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:01.430 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:01.430 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:01.430 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:01.430 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:01.688 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1
00:18:01.688 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:01.688 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:01.688 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:18:01.688 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:01.688 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:01.688 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:01.688 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:01.688 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:01.688 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:01.688 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:01.688 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:01.688 12:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:01.946
00:18:01.946 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:01.946 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:01.946 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:02.205 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:02.205 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:02.205 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:02.205 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:02.205 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:02.205 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:02.205 {
00:18:02.205 "cntlid": 123,
00:18:02.205 "qid": 0,
00:18:02.205 "state": "enabled",
00:18:02.205 "thread": "nvmf_tgt_poll_group_000",
00:18:02.205 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:18:02.205 "listen_address": {
00:18:02.205 "trtype": "TCP",
00:18:02.205 "adrfam": "IPv4",
00:18:02.205 "traddr": "10.0.0.2",
00:18:02.205 "trsvcid": "4420"
00:18:02.205 },
00:18:02.205 "peer_address": {
00:18:02.205 "trtype": "TCP",
00:18:02.205 "adrfam": "IPv4",
00:18:02.205 "traddr": "10.0.0.1",
00:18:02.205 "trsvcid": "34274"
00:18:02.205 },
00:18:02.205 "auth": {
00:18:02.205 "state": "completed",
00:18:02.205 "digest": "sha512",
00:18:02.205 "dhgroup": "ffdhe4096"
00:18:02.205 }
00:18:02.205 }
00:18:02.205 ]'
00:18:02.205 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:02.205 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:02.205 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:02.205 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:02.205 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:02.205 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:02.205 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:02.205 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:02.463 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==:
00:18:02.463 12:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==:
00:18:03.029 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:03.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:03.029 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:18:03.029 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:03.029 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:03.029 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:03.029 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:03.029 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:03.029 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:03.287 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:18:03.287 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:03.287 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:03.287 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:18:03.287 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:03.287 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:03.287 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:03.287 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:03.287 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:03.287 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:03.287 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:03.287 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:03.287 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:03.545
00:18:03.545 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:03.545 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:03.545 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:03.804 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:03.804 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:03.804 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:03.804 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:03.804 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:03.804 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:03.804 {
00:18:03.804 "cntlid": 125,
00:18:03.804 "qid": 0,
00:18:03.804 "state": "enabled",
00:18:03.804 "thread": "nvmf_tgt_poll_group_000",
00:18:03.804 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:18:03.804 "listen_address": {
00:18:03.804 "trtype": "TCP",
00:18:03.804 "adrfam": "IPv4",
00:18:03.804 "traddr": "10.0.0.2",
00:18:03.804 "trsvcid": "4420"
00:18:03.804 },
00:18:03.804 "peer_address": {
00:18:03.804 "trtype": "TCP",
00:18:03.804 "adrfam": "IPv4",
00:18:03.804 "traddr": "10.0.0.1",
00:18:03.804 "trsvcid": "34296"
00:18:03.804 },
00:18:03.804 "auth": {
00:18:03.804 "state": "completed",
00:18:03.804 "digest": "sha512",
00:18:03.804 "dhgroup": "ffdhe4096"
00:18:03.804 }
00:18:03.804 }
00:18:03.804 ]'
00:18:03.804 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:03.804 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:03.804 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:03.804 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:04.062 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:04.062 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:04.062 12:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:04.062 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:04.062 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i:
00:18:04.062 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i:
00:18:04.627 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:04.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:04.627 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:18:04.627 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:04.627 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:04.627 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:04.627 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:04.627 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:04.627 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:18:04.885 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:18:04.886 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:04.886 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:04.886 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:18:04.886 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:04.886 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:04.886 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:18:04.886 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:04.886 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:04.886 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:04.886 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:04.886 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:04.886 12:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:05.143
00:18:05.143 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:05.143 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:05.143 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:05.402 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:05.402 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:05.402 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:05.402 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:05.402 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:05.402 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:05.402 {
00:18:05.402 "cntlid": 127,
00:18:05.402 "qid": 0,
00:18:05.402 "state": "enabled",
00:18:05.402 "thread": "nvmf_tgt_poll_group_000",
00:18:05.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:18:05.402 "listen_address": {
00:18:05.402 "trtype": "TCP",
00:18:05.402 "adrfam": "IPv4",
00:18:05.402 "traddr": "10.0.0.2",
00:18:05.402 "trsvcid": "4420"
00:18:05.402 },
00:18:05.402 "peer_address": {
00:18:05.402 "trtype": "TCP",
00:18:05.402 "adrfam": "IPv4",
00:18:05.402 "traddr": "10.0.0.1",
00:18:05.402 "trsvcid": "34318"
00:18:05.402 },
00:18:05.402 "auth": {
00:18:05.402 "state": "completed",
00:18:05.402 "digest": "sha512",
00:18:05.402 "dhgroup": "ffdhe4096"
00:18:05.402 }
00:18:05.402 }
00:18:05.402 ]'
00:18:05.402 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:05.402 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:18:05.402 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:05.660 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:05.660 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:05.660 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:05.660 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:05.660 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:05.918 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=:
00:18:05.918 12:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=:
00:18:06.484 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:06.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:06.484 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:18:06.484 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:06.484 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:06.484 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:06.484 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:06.484 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:06.484 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:18:06.484 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:18:06.484 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:18:06.484 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:06.484 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:18:06.484 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:18:06.484 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:06.484 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:06.484 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.484 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.484 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.484 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.484 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.484 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.484 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.050 00:18:07.050 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.050 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.050 12:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.050 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.050 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.050 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.050 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.050 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.050 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.050 { 00:18:07.050 "cntlid": 129, 00:18:07.050 "qid": 0, 00:18:07.050 "state": "enabled", 00:18:07.050 "thread": "nvmf_tgt_poll_group_000", 00:18:07.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:07.050 "listen_address": { 00:18:07.050 "trtype": "TCP", 00:18:07.050 "adrfam": "IPv4", 00:18:07.050 "traddr": "10.0.0.2", 00:18:07.050 "trsvcid": "4420" 00:18:07.050 }, 00:18:07.050 "peer_address": { 00:18:07.050 "trtype": "TCP", 00:18:07.050 "adrfam": "IPv4", 00:18:07.050 "traddr": "10.0.0.1", 00:18:07.050 "trsvcid": "39928" 00:18:07.050 }, 00:18:07.050 "auth": { 00:18:07.050 "state": "completed", 00:18:07.050 "digest": "sha512", 00:18:07.050 "dhgroup": "ffdhe6144" 00:18:07.050 } 00:18:07.050 } 00:18:07.050 ]' 00:18:07.050 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.308 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.308 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.308 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:07.308 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.308 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:18:07.308 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.308 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.566 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:18:07.566 12:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:18:08.132 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.132 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:08.132 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.132 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.132 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:08.133 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.133 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:08.133 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:08.391 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:08.391 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.391 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.391 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:08.391 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:08.391 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.391 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.391 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.391 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.391 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.391 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.391 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.391 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.649 00:18:08.649 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.649 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.649 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.907 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.907 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.907 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.907 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.907 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.907 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.907 { 00:18:08.907 "cntlid": 131, 00:18:08.907 "qid": 0, 
00:18:08.907 "state": "enabled", 00:18:08.907 "thread": "nvmf_tgt_poll_group_000", 00:18:08.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:08.907 "listen_address": { 00:18:08.907 "trtype": "TCP", 00:18:08.907 "adrfam": "IPv4", 00:18:08.907 "traddr": "10.0.0.2", 00:18:08.907 "trsvcid": "4420" 00:18:08.907 }, 00:18:08.907 "peer_address": { 00:18:08.907 "trtype": "TCP", 00:18:08.907 "adrfam": "IPv4", 00:18:08.907 "traddr": "10.0.0.1", 00:18:08.907 "trsvcid": "39956" 00:18:08.907 }, 00:18:08.907 "auth": { 00:18:08.907 "state": "completed", 00:18:08.907 "digest": "sha512", 00:18:08.907 "dhgroup": "ffdhe6144" 00:18:08.907 } 00:18:08.907 } 00:18:08.907 ]' 00:18:08.907 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.907 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.907 12:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.907 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:08.907 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.907 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.907 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.907 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.165 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret 
DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:18:09.165 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:18:09.731 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.731 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:09.731 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.731 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.731 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.731 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.731 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.731 12:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.989 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 
ffdhe6144 2 00:18:09.989 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.989 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:09.989 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:09.989 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:09.989 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.989 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.989 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.989 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.989 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.989 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.989 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.989 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.247 00:18:10.506 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.506 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.506 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.506 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.506 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.506 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.506 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.506 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.506 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.506 { 00:18:10.506 "cntlid": 133, 00:18:10.506 "qid": 0, 00:18:10.506 "state": "enabled", 00:18:10.506 "thread": "nvmf_tgt_poll_group_000", 00:18:10.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:10.506 "listen_address": { 00:18:10.506 "trtype": "TCP", 00:18:10.506 "adrfam": "IPv4", 00:18:10.506 "traddr": "10.0.0.2", 00:18:10.506 "trsvcid": "4420" 00:18:10.506 }, 00:18:10.506 "peer_address": { 00:18:10.506 "trtype": "TCP", 00:18:10.506 "adrfam": "IPv4", 00:18:10.506 "traddr": "10.0.0.1", 00:18:10.506 "trsvcid": "39972" 00:18:10.506 }, 00:18:10.506 "auth": { 00:18:10.506 "state": "completed", 00:18:10.506 "digest": "sha512", 00:18:10.506 "dhgroup": "ffdhe6144" 00:18:10.506 } 
00:18:10.506 } 00:18:10.506 ]' 00:18:10.506 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.764 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.764 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.764 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:10.764 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.764 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.764 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.764 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.023 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:18:11.023 12:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:18:11.589 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:18:11.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.589 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:11.589 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.589 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.589 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.589 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.589 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:11.589 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:11.589 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:11.589 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.589 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.589 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:11.589 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:11.589 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.589 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:11.589 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.589 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.847 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.847 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:11.847 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:11.847 12:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.105 00:18:12.105 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.105 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.105 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.363 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.363 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:12.363 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.363 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.363 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.363 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.363 { 00:18:12.363 "cntlid": 135, 00:18:12.363 "qid": 0, 00:18:12.363 "state": "enabled", 00:18:12.363 "thread": "nvmf_tgt_poll_group_000", 00:18:12.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:12.363 "listen_address": { 00:18:12.363 "trtype": "TCP", 00:18:12.363 "adrfam": "IPv4", 00:18:12.363 "traddr": "10.0.0.2", 00:18:12.363 "trsvcid": "4420" 00:18:12.363 }, 00:18:12.363 "peer_address": { 00:18:12.363 "trtype": "TCP", 00:18:12.363 "adrfam": "IPv4", 00:18:12.363 "traddr": "10.0.0.1", 00:18:12.363 "trsvcid": "40010" 00:18:12.363 }, 00:18:12.363 "auth": { 00:18:12.363 "state": "completed", 00:18:12.363 "digest": "sha512", 00:18:12.363 "dhgroup": "ffdhe6144" 00:18:12.363 } 00:18:12.363 } 00:18:12.363 ]' 00:18:12.363 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.363 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.363 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.363 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:12.363 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.363 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.363 12:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.363 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.621 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:18:12.621 12:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:18:13.188 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.188 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:13.188 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.188 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.188 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.188 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:13.188 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.188 
12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:13.188 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:13.446 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:13.446 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.446 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:13.446 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:13.446 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:13.446 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.446 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.446 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.446 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.446 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.446 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.446 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # 
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.446 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.013 00:18:14.013 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.013 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.013 12:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.013 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.013 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.013 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.013 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.272 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.272 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.272 { 00:18:14.272 "cntlid": 137, 00:18:14.272 "qid": 0, 00:18:14.272 "state": "enabled", 00:18:14.272 "thread": "nvmf_tgt_poll_group_000", 00:18:14.272 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:14.272 "listen_address": { 00:18:14.272 "trtype": "TCP", 00:18:14.272 "adrfam": "IPv4", 00:18:14.272 "traddr": "10.0.0.2", 00:18:14.272 "trsvcid": "4420" 00:18:14.272 }, 00:18:14.272 "peer_address": { 00:18:14.272 "trtype": "TCP", 00:18:14.272 "adrfam": "IPv4", 00:18:14.272 "traddr": "10.0.0.1", 00:18:14.272 "trsvcid": "40028" 00:18:14.272 }, 00:18:14.272 "auth": { 00:18:14.272 "state": "completed", 00:18:14.272 "digest": "sha512", 00:18:14.272 "dhgroup": "ffdhe8192" 00:18:14.272 } 00:18:14.272 } 00:18:14.272 ]' 00:18:14.272 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.272 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.272 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.272 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:14.272 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.272 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.272 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.272 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.530 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret 
DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:18:14.530 12:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:18:15.096 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.096 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:15.096 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.096 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.096 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.096 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.096 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:15.096 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:15.354 12:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:15.355 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.355 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:15.355 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:15.355 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:15.355 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.355 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.355 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.355 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.355 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.355 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.355 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.355 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.612 00:18:15.871 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.871 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.871 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.871 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.871 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.871 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.871 12:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.871 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.871 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.871 { 00:18:15.871 "cntlid": 139, 00:18:15.871 "qid": 0, 00:18:15.871 "state": "enabled", 00:18:15.871 "thread": "nvmf_tgt_poll_group_000", 00:18:15.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:15.871 "listen_address": { 00:18:15.871 "trtype": "TCP", 00:18:15.871 "adrfam": "IPv4", 00:18:15.871 "traddr": "10.0.0.2", 00:18:15.871 "trsvcid": "4420" 00:18:15.871 }, 00:18:15.871 "peer_address": { 00:18:15.871 "trtype": "TCP", 00:18:15.871 "adrfam": "IPv4", 00:18:15.871 "traddr": "10.0.0.1", 00:18:15.871 "trsvcid": "40044" 00:18:15.871 }, 00:18:15.871 "auth": { 00:18:15.871 "state": 
"completed", 00:18:15.871 "digest": "sha512", 00:18:15.871 "dhgroup": "ffdhe8192" 00:18:15.871 } 00:18:15.871 } 00:18:15.871 ]' 00:18:15.871 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.129 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.129 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.129 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:16.129 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.129 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.129 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.129 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.387 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:18:16.387 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: --dhchap-ctrl-secret DHHC-1:02:MjExZTEwYWQyMWY4MDE4OThjNzBkYjlkMDM4OTg5ODQ3YmQ5ZmFkNzA3MGU5YzUy5VWPWA==: 00:18:16.953 12:26:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.953 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:16.953 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.953 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.953 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.953 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.953 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:16.953 12:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:16.953 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:17.212 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.212 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:17.212 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:17.212 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:17.212 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.212 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.212 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.212 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.212 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.212 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.212 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.212 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.470 00:18:17.728 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.728 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.728 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.728 
12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.728 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.728 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.728 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.728 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.728 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.728 { 00:18:17.728 "cntlid": 141, 00:18:17.728 "qid": 0, 00:18:17.728 "state": "enabled", 00:18:17.728 "thread": "nvmf_tgt_poll_group_000", 00:18:17.728 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:17.728 "listen_address": { 00:18:17.728 "trtype": "TCP", 00:18:17.728 "adrfam": "IPv4", 00:18:17.728 "traddr": "10.0.0.2", 00:18:17.728 "trsvcid": "4420" 00:18:17.728 }, 00:18:17.728 "peer_address": { 00:18:17.728 "trtype": "TCP", 00:18:17.728 "adrfam": "IPv4", 00:18:17.728 "traddr": "10.0.0.1", 00:18:17.728 "trsvcid": "44012" 00:18:17.728 }, 00:18:17.728 "auth": { 00:18:17.728 "state": "completed", 00:18:17.728 "digest": "sha512", 00:18:17.728 "dhgroup": "ffdhe8192" 00:18:17.728 } 00:18:17.728 } 00:18:17.728 ]' 00:18:17.728 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.986 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.986 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.986 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:17.986 12:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.986 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.986 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.986 12:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.243 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:18:18.243 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:01:MDE3NjM5MmVkYmNjMDBjY2MxYTIxMDZiMzgxNGJjNDIjXq3i: 00:18:18.810 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.810 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:18.810 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.810 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.810 
12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.810 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.810 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:18.810 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:19.068 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:19.068 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.068 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:19.068 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:19.068 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:19.068 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.068 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:19.068 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.068 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.068 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.068 12:26:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:19.068 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.068 12:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.327 00:18:19.584 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.584 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.584 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.584 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.584 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.584 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.584 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.584 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.584 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.584 { 00:18:19.584 "cntlid": 143, 
00:18:19.584 "qid": 0, 00:18:19.584 "state": "enabled", 00:18:19.584 "thread": "nvmf_tgt_poll_group_000", 00:18:19.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:19.584 "listen_address": { 00:18:19.584 "trtype": "TCP", 00:18:19.585 "adrfam": "IPv4", 00:18:19.585 "traddr": "10.0.0.2", 00:18:19.585 "trsvcid": "4420" 00:18:19.585 }, 00:18:19.585 "peer_address": { 00:18:19.585 "trtype": "TCP", 00:18:19.585 "adrfam": "IPv4", 00:18:19.585 "traddr": "10.0.0.1", 00:18:19.585 "trsvcid": "44026" 00:18:19.585 }, 00:18:19.585 "auth": { 00:18:19.585 "state": "completed", 00:18:19.585 "digest": "sha512", 00:18:19.585 "dhgroup": "ffdhe8192" 00:18:19.585 } 00:18:19.585 } 00:18:19.585 ]' 00:18:19.585 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.842 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.842 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.842 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:19.842 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.842 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.842 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.843 12:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.100 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:18:20.100 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:18:20.666 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.666 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:20.666 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.666 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.666 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.666 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:20.667 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:20.667 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:20.667 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:20.667 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:18:20.667 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:20.667 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:20.667 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.667 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:20.667 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:20.667 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:20.667 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.667 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.925 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.925 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.925 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.925 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.925 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.925 12:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.182 00:18:21.182 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.182 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.182 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.441 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.441 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.441 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.441 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.441 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.441 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.441 { 00:18:21.441 "cntlid": 145, 00:18:21.441 "qid": 0, 00:18:21.441 "state": "enabled", 00:18:21.441 "thread": "nvmf_tgt_poll_group_000", 00:18:21.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:21.441 "listen_address": 
{ 00:18:21.441 "trtype": "TCP", 00:18:21.441 "adrfam": "IPv4", 00:18:21.441 "traddr": "10.0.0.2", 00:18:21.441 "trsvcid": "4420" 00:18:21.441 }, 00:18:21.441 "peer_address": { 00:18:21.441 "trtype": "TCP", 00:18:21.441 "adrfam": "IPv4", 00:18:21.441 "traddr": "10.0.0.1", 00:18:21.441 "trsvcid": "44052" 00:18:21.441 }, 00:18:21.441 "auth": { 00:18:21.441 "state": "completed", 00:18:21.441 "digest": "sha512", 00:18:21.441 "dhgroup": "ffdhe8192" 00:18:21.441 } 00:18:21.441 } 00:18:21.441 ]' 00:18:21.441 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.441 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.441 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.699 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:21.699 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.699 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.699 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.699 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.699 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:18:21.699 12:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MWU5NjYxYjNkYWZhMzkzMTE3N2MwMGUwYzQ4YTAwODg1YWIxYWM0OTg5Yjg3Zjg2pP4/Hw==: --dhchap-ctrl-secret DHHC-1:03:OWRmMWQ0N2EzNzdjZGI5NWJmOGE0ZDcxOGZmNTRiMTU2YTE3YTIxYmM1ZTM1OTQwY2Q3ZTQxMjhiOTUzY2Q3MFKDJXg=: 00:18:22.266 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.266 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.266 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.266 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.525 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.525 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:22.525 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.525 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.525 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.525 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:22.525 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # local es=0 00:18:22.525 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:22.525 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:22.525 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.525 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:22.525 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.525 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:22.525 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:22.525 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:22.784 request: 00:18:22.784 { 00:18:22.784 "name": "nvme0", 00:18:22.784 "trtype": "tcp", 00:18:22.784 "traddr": "10.0.0.2", 00:18:22.784 "adrfam": "ipv4", 00:18:22.784 "trsvcid": "4420", 00:18:22.784 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:22.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:22.784 "prchk_reftag": false, 00:18:22.784 "prchk_guard": false, 00:18:22.784 "hdgst": false, 00:18:22.784 "ddgst": 
false, 00:18:22.784 "dhchap_key": "key2", 00:18:22.784 "allow_unrecognized_csi": false, 00:18:22.784 "method": "bdev_nvme_attach_controller", 00:18:22.784 "req_id": 1 00:18:22.784 } 00:18:22.784 Got JSON-RPC error response 00:18:22.784 response: 00:18:22.784 { 00:18:22.784 "code": -5, 00:18:22.784 "message": "Input/output error" 00:18:22.784 } 00:18:22.784 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:22.784 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:22.784 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:22.784 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:22.784 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.784 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.784 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.784 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.784 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.784 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.784 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.784 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:22.784 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:22.784 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:22.784 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:22.784 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:22.784 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.784 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:22.784 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:22.784 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:22.784 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:22.784 12:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:23.351 request: 00:18:23.351 { 00:18:23.351 "name": "nvme0", 00:18:23.351 "trtype": "tcp", 00:18:23.351 "traddr": "10.0.0.2", 
00:18:23.351 "adrfam": "ipv4", 00:18:23.351 "trsvcid": "4420", 00:18:23.351 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:23.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:23.351 "prchk_reftag": false, 00:18:23.351 "prchk_guard": false, 00:18:23.351 "hdgst": false, 00:18:23.351 "ddgst": false, 00:18:23.351 "dhchap_key": "key1", 00:18:23.351 "dhchap_ctrlr_key": "ckey2", 00:18:23.351 "allow_unrecognized_csi": false, 00:18:23.351 "method": "bdev_nvme_attach_controller", 00:18:23.351 "req_id": 1 00:18:23.351 } 00:18:23.351 Got JSON-RPC error response 00:18:23.351 response: 00:18:23.351 { 00:18:23.351 "code": -5, 00:18:23.351 "message": "Input/output error" 00:18:23.351 } 00:18:23.351 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:23.351 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:23.351 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:23.351 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:23.351 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:23.351 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.351 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.351 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.351 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
00:18:23.351 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.351 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.351 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.351 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.351 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:23.351 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.351 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:23.351 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.351 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:23.351 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.351 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.351 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.351 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.918 request: 00:18:23.918 { 00:18:23.918 "name": "nvme0", 00:18:23.918 "trtype": "tcp", 00:18:23.918 "traddr": "10.0.0.2", 00:18:23.918 "adrfam": "ipv4", 00:18:23.918 "trsvcid": "4420", 00:18:23.918 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:23.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:23.918 "prchk_reftag": false, 00:18:23.918 "prchk_guard": false, 00:18:23.918 "hdgst": false, 00:18:23.918 "ddgst": false, 00:18:23.918 "dhchap_key": "key1", 00:18:23.918 "dhchap_ctrlr_key": "ckey1", 00:18:23.918 "allow_unrecognized_csi": false, 00:18:23.918 "method": "bdev_nvme_attach_controller", 00:18:23.918 "req_id": 1 00:18:23.918 } 00:18:23.918 Got JSON-RPC error response 00:18:23.918 response: 00:18:23.918 { 00:18:23.918 "code": -5, 00:18:23.918 "message": "Input/output error" 00:18:23.918 } 00:18:23.918 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:23.918 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:23.918 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:23.918 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:23.918 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:23.918 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.918 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.918 
12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.918 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1610245 00:18:23.918 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1610245 ']' 00:18:23.918 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1610245 00:18:23.918 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:23.918 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.918 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1610245 00:18:23.918 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:23.918 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:23.918 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1610245' 00:18:23.918 killing process with pid 1610245 00:18:23.918 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1610245 00:18:23.918 12:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1610245 00:18:24.177 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:24.177 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:24.177 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:24.177 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:24.177 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1631964 00:18:24.177 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1631964 00:18:24.177 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:24.177 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1631964 ']' 00:18:24.177 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.177 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:24.177 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:24.177 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:24.177 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.177 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.177 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:24.177 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:24.177 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:24.177 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.437 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.437 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:24.437 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1631964 00:18:24.437 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1631964 ']' 00:18:24.437 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.437 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:24.437 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:24.437 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:24.437 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.437 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.437 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:24.437 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:24.437 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.437 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.696 null0 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.4q9 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.A4M ]] 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.A4M 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.696 12:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.c86 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Bt6 ]] 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Bt6 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.aAs 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.696 12:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.9ny ]] 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9ny 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.gOG 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.696 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:24.697 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.697 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.697 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.697 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:24.697 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.697 12:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.630 nvme0n1 00:18:25.630 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.630 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.630 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:25.630 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.630 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.630 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.630 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.630 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.630 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.630 { 00:18:25.630 "cntlid": 1, 00:18:25.630 "qid": 0, 00:18:25.630 "state": "enabled", 00:18:25.630 "thread": "nvmf_tgt_poll_group_000", 00:18:25.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:25.630 "listen_address": { 00:18:25.630 "trtype": "TCP", 00:18:25.630 "adrfam": "IPv4", 00:18:25.630 "traddr": "10.0.0.2", 00:18:25.630 "trsvcid": "4420" 00:18:25.630 }, 00:18:25.630 "peer_address": { 00:18:25.630 "trtype": "TCP", 00:18:25.630 "adrfam": "IPv4", 00:18:25.630 "traddr": "10.0.0.1", 00:18:25.630 "trsvcid": "44086" 00:18:25.630 }, 00:18:25.630 "auth": { 00:18:25.630 "state": "completed", 00:18:25.630 "digest": "sha512", 00:18:25.630 "dhgroup": "ffdhe8192" 00:18:25.630 } 00:18:25.630 } 00:18:25.630 ]' 00:18:25.630 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.630 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.630 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.887 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:18:25.887 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.887 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.887 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.887 12:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.145 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:18:26.145 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:18:26.711 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.712 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:26.712 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.712 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.712 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:18:26.712 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:26.712 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.712 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.712 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.712 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:26.712 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:26.970 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:26.970 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:26.970 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:26.970 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:26.970 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.970 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:26.970 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.970 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:18:26.970 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.970 12:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.970 request: 00:18:26.970 { 00:18:26.970 "name": "nvme0", 00:18:26.970 "trtype": "tcp", 00:18:26.970 "traddr": "10.0.0.2", 00:18:26.970 "adrfam": "ipv4", 00:18:26.970 "trsvcid": "4420", 00:18:26.970 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:26.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:26.970 "prchk_reftag": false, 00:18:26.970 "prchk_guard": false, 00:18:26.970 "hdgst": false, 00:18:26.970 "ddgst": false, 00:18:26.970 "dhchap_key": "key3", 00:18:26.970 "allow_unrecognized_csi": false, 00:18:26.970 "method": "bdev_nvme_attach_controller", 00:18:26.970 "req_id": 1 00:18:26.970 } 00:18:26.970 Got JSON-RPC error response 00:18:26.970 response: 00:18:26.970 { 00:18:26.970 "code": -5, 00:18:26.970 "message": "Input/output error" 00:18:26.970 } 00:18:26.970 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:26.970 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:26.970 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:26.970 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:26.970 
12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:26.970 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:26.970 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:26.970 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:27.229 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:27.229 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:27.229 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:27.229 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:27.229 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.229 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:27.229 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.229 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:27.229 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.229 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.487 request: 00:18:27.487 { 00:18:27.487 "name": "nvme0", 00:18:27.487 "trtype": "tcp", 00:18:27.487 "traddr": "10.0.0.2", 00:18:27.487 "adrfam": "ipv4", 00:18:27.487 "trsvcid": "4420", 00:18:27.487 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:27.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:27.487 "prchk_reftag": false, 00:18:27.487 "prchk_guard": false, 00:18:27.487 "hdgst": false, 00:18:27.487 "ddgst": false, 00:18:27.487 "dhchap_key": "key3", 00:18:27.487 "allow_unrecognized_csi": false, 00:18:27.487 "method": "bdev_nvme_attach_controller", 00:18:27.487 "req_id": 1 00:18:27.487 } 00:18:27.487 Got JSON-RPC error response 00:18:27.487 response: 00:18:27.487 { 00:18:27.487 "code": -5, 00:18:27.487 "message": "Input/output error" 00:18:27.487 } 00:18:27.487 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:27.487 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:27.487 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:27.487 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:27.487 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:27.487 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:27.487 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@197 -- # IFS=, 00:18:27.487 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:27.487 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:27.487 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:27.746 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:27.746 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.746 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.746 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.746 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:27.746 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.746 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.746 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.746 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key key1 00:18:27.746 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:27.746 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:27.746 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:27.746 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.746 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:27.746 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.746 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:27.746 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:27.746 12:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:28.004 request: 00:18:28.004 { 00:18:28.004 "name": "nvme0", 00:18:28.004 "trtype": "tcp", 00:18:28.004 "traddr": "10.0.0.2", 00:18:28.004 "adrfam": "ipv4", 00:18:28.004 "trsvcid": "4420", 00:18:28.004 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:28.004 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:28.004 "prchk_reftag": false, 00:18:28.004 "prchk_guard": false, 00:18:28.004 "hdgst": false, 00:18:28.004 "ddgst": false, 00:18:28.004 "dhchap_key": "key0", 00:18:28.004 "dhchap_ctrlr_key": "key1", 00:18:28.004 "allow_unrecognized_csi": false, 00:18:28.004 "method": "bdev_nvme_attach_controller", 00:18:28.004 "req_id": 1 00:18:28.004 } 00:18:28.004 Got JSON-RPC error response 00:18:28.004 response: 00:18:28.004 { 00:18:28.004 "code": -5, 00:18:28.004 "message": "Input/output error" 00:18:28.004 } 00:18:28.004 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:28.004 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:28.004 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:28.004 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:28.004 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:28.004 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:28.004 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:28.263 nvme0n1 00:18:28.263 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc 
bdev_nvme_get_controllers 00:18:28.263 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:28.263 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.521 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.521 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.521 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.778 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:28.778 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.778 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.778 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.778 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:28.778 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:28.779 12:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:29.407 nvme0n1 00:18:29.407 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:29.407 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:29.407 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.687 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.687 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:29.687 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.687 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.687 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.687 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:29.687 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:29.687 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.971 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.971 12:26:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:18:29.971 12:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: --dhchap-ctrl-secret DHHC-1:03:NTlkMzU1Y2JlYzMyZjUyOWYwNWZmYjFmNWE0ODA3YmUyM2UwZTBkYTA4ZmE1NzVkNjY5YzdlNzlmNjQ5NTVjNUdoyiE=: 00:18:30.543 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:30.543 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:30.543 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:30.543 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:30.543 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:30.543 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:30.543 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:30.543 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.543 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:30.543 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:18:30.543 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:30.543 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:30.543 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:30.800 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.801 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:30.801 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:30.801 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:30.801 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:30.801 12:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:31.059 request: 00:18:31.059 { 00:18:31.059 "name": "nvme0", 00:18:31.059 "trtype": "tcp", 00:18:31.059 "traddr": "10.0.0.2", 00:18:31.059 "adrfam": "ipv4", 00:18:31.059 "trsvcid": "4420", 00:18:31.059 "subnqn": 
"nqn.2024-03.io.spdk:cnode0", 00:18:31.059 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:31.059 "prchk_reftag": false, 00:18:31.059 "prchk_guard": false, 00:18:31.059 "hdgst": false, 00:18:31.059 "ddgst": false, 00:18:31.059 "dhchap_key": "key1", 00:18:31.059 "allow_unrecognized_csi": false, 00:18:31.059 "method": "bdev_nvme_attach_controller", 00:18:31.059 "req_id": 1 00:18:31.059 } 00:18:31.059 Got JSON-RPC error response 00:18:31.059 response: 00:18:31.059 { 00:18:31.059 "code": -5, 00:18:31.059 "message": "Input/output error" 00:18:31.059 } 00:18:31.059 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:31.059 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:31.059 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:31.059 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:31.059 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:31.059 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:31.059 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:31.993 nvme0n1 00:18:31.993 12:26:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:31.993 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:31.993 12:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.993 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.993 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.993 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.252 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:32.252 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.252 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.252 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.252 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:32.252 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:32.252 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:32.510 nvme0n1 00:18:32.510 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:32.510 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:32.510 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.768 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.768 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.768 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.027 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:33.027 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.027 12:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.027 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.027 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: '' 2s 00:18:33.027 
12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:33.027 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:33.027 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: 00:18:33.027 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:33.027 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:33.027 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:33.027 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: ]] 00:18:33.027 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:N2U1ZDg2YjMwNGE4YTM3ZDRjNjYxODgyMDY5ZDQyZDBTM50X: 00:18:33.027 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:33.027 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:33.027 12:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:34.928 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:34.928 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:34.928 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:34.928 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:34.928 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:34.928 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:34.928 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:34.928 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:34.928 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.928 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.928 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.928 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: 2s 00:18:34.928 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:34.928 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:34.928 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:34.928 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: 00:18:34.928 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:34.928 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:34.928 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:34.928 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z 
DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: ]] 00:18:34.928 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:OTRmZGIzMGVmZGJkOWEyMWI4YTNhOWE4M2ZjMjAyOWJiNDZiMTYxNThkMGI4OWE0WY/MQQ==: 00:18:34.928 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:34.928 12:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:37.458 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:37.458 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:37.458 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:37.458 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:37.458 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:37.458 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:37.458 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:37.458 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.458 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:37.458 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.458 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:37.458 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.458 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:37.458 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:37.458 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:37.716 nvme0n1 00:18:37.974 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:37.974 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.974 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.974 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.974 12:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:37.974 12:26:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:38.541 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:38.541 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:38.541 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.541 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.541 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:38.541 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.541 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.541 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.541 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:38.541 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:38.799 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:38.799 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:38.799 12:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.057 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.057 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:39.057 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.057 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.057 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.057 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:39.057 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:39.057 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:39.057 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:39.057 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.057 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:39.058 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:39.058 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key1 --dhchap-ctrlr-key key3 00:18:39.058 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:39.316 request: 00:18:39.316 { 00:18:39.316 "name": "nvme0", 00:18:39.316 "dhchap_key": "key1", 00:18:39.316 "dhchap_ctrlr_key": "key3", 00:18:39.316 "method": "bdev_nvme_set_keys", 00:18:39.316 "req_id": 1 00:18:39.316 } 00:18:39.316 Got JSON-RPC error response 00:18:39.316 response: 00:18:39.316 { 00:18:39.316 "code": -13, 00:18:39.316 "message": "Permission denied" 00:18:39.316 } 00:18:39.316 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:39.316 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:39.316 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:39.316 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:39.575 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:39.575 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:39.575 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.575 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:39.575 12:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:40.949 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:40.949 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # jq length 00:18:40.949 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.949 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:40.950 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:40.950 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.950 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.950 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.950 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:40.950 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:40.950 12:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:41.515 nvme0n1 
00:18:41.515 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:41.515 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.515 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.515 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.515 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:41.515 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:41.515 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:41.515 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:41.515 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.515 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:41.515 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:41.515 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:41.515 12:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 
--dhchap-ctrlr-key key0 00:18:42.081 request: 00:18:42.081 { 00:18:42.081 "name": "nvme0", 00:18:42.081 "dhchap_key": "key2", 00:18:42.081 "dhchap_ctrlr_key": "key0", 00:18:42.081 "method": "bdev_nvme_set_keys", 00:18:42.081 "req_id": 1 00:18:42.081 } 00:18:42.081 Got JSON-RPC error response 00:18:42.081 response: 00:18:42.081 { 00:18:42.081 "code": -13, 00:18:42.081 "message": "Permission denied" 00:18:42.081 } 00:18:42.081 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:42.081 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:42.081 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:42.081 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:42.081 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:42.081 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:42.081 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.340 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:42.340 12:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:43.273 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:43.273 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:43.273 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.532 12:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:43.532 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:43.532 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:43.532 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1610266 00:18:43.532 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1610266 ']' 00:18:43.532 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1610266 00:18:43.532 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:43.532 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.532 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1610266 00:18:43.532 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:43.532 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:43.532 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1610266' 00:18:43.532 killing process with pid 1610266 00:18:43.532 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1610266 00:18:43.532 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1610266 00:18:43.790 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:43.790 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:43.790 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@121 -- # sync 00:18:43.790 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:43.790 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:43.790 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:43.790 12:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:43.790 rmmod nvme_tcp 00:18:44.049 rmmod nvme_fabrics 00:18:44.049 rmmod nvme_keyring 00:18:44.049 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:44.049 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:44.049 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:44.049 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1631964 ']' 00:18:44.049 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1631964 00:18:44.049 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1631964 ']' 00:18:44.049 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1631964 00:18:44.049 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:44.049 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.049 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1631964 00:18:44.049 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:44.049 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:44.049 
12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1631964' 00:18:44.049 killing process with pid 1631964 00:18:44.049 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1631964 00:18:44.049 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1631964 00:18:44.308 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:44.308 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:44.308 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:44.308 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:44.308 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:44.308 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:44.308 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:44.308 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:44.309 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:44.309 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.309 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.309 12:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.213 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:46.213 12:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.4q9 /tmp/spdk.key-sha256.c86 /tmp/spdk.key-sha384.aAs /tmp/spdk.key-sha512.gOG /tmp/spdk.key-sha512.A4M /tmp/spdk.key-sha384.Bt6 /tmp/spdk.key-sha256.9ny '' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvmf-auth.log 00:18:46.213 00:18:46.213 real 2m33.596s 00:18:46.213 user 5m54.078s 00:18:46.213 sys 0m24.476s 00:18:46.213 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:46.213 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.213 ************************************ 00:18:46.213 END TEST nvmf_auth_target 00:18:46.213 ************************************ 00:18:46.213 12:27:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:46.213 12:27:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:46.213 12:27:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:46.213 12:27:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:46.213 12:27:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:46.473 ************************************ 00:18:46.473 START TEST nvmf_bdevio_no_huge 00:18:46.473 ************************************ 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:46.473 * Looking for test storage... 
00:18:46.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:46.473 12:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:46.473 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:46.473 12:27:08 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:46.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.473 --rc genhtml_branch_coverage=1 00:18:46.473 --rc genhtml_function_coverage=1 00:18:46.473 --rc genhtml_legend=1 00:18:46.473 --rc geninfo_all_blocks=1 00:18:46.473 --rc geninfo_unexecuted_blocks=1 00:18:46.473 00:18:46.473 ' 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:46.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.474 --rc genhtml_branch_coverage=1 00:18:46.474 --rc genhtml_function_coverage=1 00:18:46.474 --rc genhtml_legend=1 00:18:46.474 --rc geninfo_all_blocks=1 00:18:46.474 --rc geninfo_unexecuted_blocks=1 00:18:46.474 00:18:46.474 ' 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:46.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.474 --rc genhtml_branch_coverage=1 00:18:46.474 --rc genhtml_function_coverage=1 00:18:46.474 --rc genhtml_legend=1 00:18:46.474 --rc geninfo_all_blocks=1 00:18:46.474 --rc geninfo_unexecuted_blocks=1 00:18:46.474 00:18:46.474 ' 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:46.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:46.474 --rc genhtml_branch_coverage=1 00:18:46.474 --rc genhtml_function_coverage=1 00:18:46.474 --rc genhtml_legend=1 00:18:46.474 --rc geninfo_all_blocks=1 00:18:46.474 --rc geninfo_unexecuted_blocks=1 00:18:46.474 00:18:46.474 ' 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:46.474 
12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:46.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:46.474 12:27:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:18:53.051 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:53.051 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:53.051 Found net devices under 0000:86:00.0: cvl_0_0 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:53.051 
12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:53.051 Found net devices under 0000:86:00.1: cvl_0_1 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:18:53.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:53.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:18:53.051 00:18:53.051 --- 10.0.0.2 ping statistics --- 00:18:53.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.051 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:53.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:53.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:18:53.051 00:18:53.051 --- 10.0.0.1 ping statistics --- 00:18:53.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:53.051 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1639386 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1639386 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1639386 ']' 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.051 12:27:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.051 [2024-12-10 12:27:14.586909] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:18:53.051 [2024-12-10 12:27:14.586959] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:53.051 [2024-12-10 12:27:14.674025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:53.051 [2024-12-10 12:27:14.721276] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.052 [2024-12-10 12:27:14.721315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.052 [2024-12-10 12:27:14.721322] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.052 [2024-12-10 12:27:14.721329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.052 [2024-12-10 12:27:14.721334] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:53.052 [2024-12-10 12:27:14.722481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:53.052 [2024-12-10 12:27:14.722506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:53.052 [2024-12-10 12:27:14.722601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:53.052 [2024-12-10 12:27:14.722601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:53.308 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.308 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:53.308 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:53.308 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.308 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.566 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.566 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:53.566 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.566 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.566 [2024-12-10 12:27:15.481300] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.566 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.566 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:53.566 12:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.566 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.566 Malloc0 00:18:53.566 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.566 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:53.566 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.566 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.566 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.566 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:53.566 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.567 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.567 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.567 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:53.567 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.567 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:53.567 [2024-12-10 12:27:15.525559] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.567 12:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.567 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:53.567 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:53.567 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:53.567 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:53.567 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:53.567 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:53.567 { 00:18:53.567 "params": { 00:18:53.567 "name": "Nvme$subsystem", 00:18:53.567 "trtype": "$TEST_TRANSPORT", 00:18:53.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:53.567 "adrfam": "ipv4", 00:18:53.567 "trsvcid": "$NVMF_PORT", 00:18:53.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:53.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:53.567 "hdgst": ${hdgst:-false}, 00:18:53.567 "ddgst": ${ddgst:-false} 00:18:53.567 }, 00:18:53.567 "method": "bdev_nvme_attach_controller" 00:18:53.567 } 00:18:53.567 EOF 00:18:53.567 )") 00:18:53.567 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:53.567 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:18:53.567 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:53.567 12:27:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:53.567 "params": { 00:18:53.567 "name": "Nvme1", 00:18:53.567 "trtype": "tcp", 00:18:53.567 "traddr": "10.0.0.2", 00:18:53.567 "adrfam": "ipv4", 00:18:53.567 "trsvcid": "4420", 00:18:53.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:53.567 "hdgst": false, 00:18:53.567 "ddgst": false 00:18:53.567 }, 00:18:53.567 "method": "bdev_nvme_attach_controller" 00:18:53.567 }' 00:18:53.567 [2024-12-10 12:27:15.576560] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:18:53.567 [2024-12-10 12:27:15.576602] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1639623 ] 00:18:53.567 [2024-12-10 12:27:15.655436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:53.567 [2024-12-10 12:27:15.704784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.567 [2024-12-10 12:27:15.704889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.567 [2024-12-10 12:27:15.704889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.133 I/O targets: 00:18:54.133 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:54.133 00:18:54.133 00:18:54.133 CUnit - A unit testing framework for C - Version 2.1-3 00:18:54.133 http://cunit.sourceforge.net/ 00:18:54.133 00:18:54.133 00:18:54.133 Suite: bdevio tests on: Nvme1n1 00:18:54.133 Test: blockdev write read block ...passed 00:18:54.133 Test: blockdev write zeroes read block ...passed 00:18:54.133 Test: blockdev write zeroes read no split ...passed 00:18:54.133 Test: blockdev write zeroes 
read split ...passed 00:18:54.133 Test: blockdev write zeroes read split partial ...passed 00:18:54.133 Test: blockdev reset ...[2024-12-10 12:27:16.157791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:54.133 [2024-12-10 12:27:16.157853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77c540 (9): Bad file descriptor 00:18:54.133 [2024-12-10 12:27:16.173733] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:18:54.133 passed 00:18:54.133 Test: blockdev write read 8 blocks ...passed 00:18:54.133 Test: blockdev write read size > 128k ...passed 00:18:54.133 Test: blockdev write read invalid size ...passed 00:18:54.133 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:54.133 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:54.133 Test: blockdev write read max offset ...passed 00:18:54.133 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:54.133 Test: blockdev writev readv 8 blocks ...passed 00:18:54.133 Test: blockdev writev readv 30 x 1block ...passed 00:18:54.412 Test: blockdev writev readv block ...passed 00:18:54.412 Test: blockdev writev readv size > 128k ...passed 00:18:54.412 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:54.412 Test: blockdev comparev and writev ...[2024-12-10 12:27:16.343017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.412 [2024-12-10 12:27:16.343045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:54.412 [2024-12-10 12:27:16.343058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.412 [2024-12-10 
12:27:16.343066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:54.412 [2024-12-10 12:27:16.343323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.412 [2024-12-10 12:27:16.343334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:54.412 [2024-12-10 12:27:16.343346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.412 [2024-12-10 12:27:16.343353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:54.412 [2024-12-10 12:27:16.343611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.412 [2024-12-10 12:27:16.343621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:54.412 [2024-12-10 12:27:16.343633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.412 [2024-12-10 12:27:16.343640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:54.412 [2024-12-10 12:27:16.343868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.412 [2024-12-10 12:27:16.343878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:54.412 [2024-12-10 12:27:16.343889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:18:54.412 [2024-12-10 12:27:16.343895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:54.412 passed 00:18:54.412 Test: blockdev nvme passthru rw ...passed 00:18:54.412 Test: blockdev nvme passthru vendor specific ...[2024-12-10 12:27:16.426519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:54.412 [2024-12-10 12:27:16.426536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:54.412 [2024-12-10 12:27:16.426645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:54.412 [2024-12-10 12:27:16.426655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:54.412 [2024-12-10 12:27:16.426762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:54.412 [2024-12-10 12:27:16.426771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:54.412 [2024-12-10 12:27:16.426882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:54.412 [2024-12-10 12:27:16.426891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:54.412 passed 00:18:54.412 Test: blockdev nvme admin passthru ...passed 00:18:54.412 Test: blockdev copy ...passed 00:18:54.412 00:18:54.412 Run Summary: Type Total Ran Passed Failed Inactive 00:18:54.412 suites 1 1 n/a 0 0 00:18:54.412 tests 23 23 23 0 0 00:18:54.412 asserts 152 152 152 0 n/a 00:18:54.412 00:18:54.412 Elapsed time = 0.896 seconds 
00:18:54.671 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:54.671 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.671 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:54.671 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.671 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:54.671 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:54.671 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:54.671 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:54.671 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:54.671 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:54.671 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:54.671 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:54.671 rmmod nvme_tcp 00:18:54.671 rmmod nvme_fabrics 00:18:54.671 rmmod nvme_keyring 00:18:54.671 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:54.671 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:18:54.671 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:54.671 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1639386 ']' 00:18:54.671 12:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1639386 00:18:54.671 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1639386 ']' 00:18:54.671 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1639386 00:18:54.671 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:54.671 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.671 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1639386 00:18:54.929 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:54.929 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:54.929 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1639386' 00:18:54.929 killing process with pid 1639386 00:18:54.929 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1639386 00:18:54.929 12:27:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1639386 00:18:55.188 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:55.188 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:55.188 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:55.188 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:55.188 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:55.188 12:27:17 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:55.188 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:55.188 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:55.188 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:55.188 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.188 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.188 12:27:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.093 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:57.093 00:18:57.093 real 0m10.861s 00:18:57.093 user 0m13.637s 00:18:57.093 sys 0m5.399s 00:18:57.093 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:57.093 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:57.093 ************************************ 00:18:57.093 END TEST nvmf_bdevio_no_huge 00:18:57.093 ************************************ 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:57.352 
************************************ 00:18:57.352 START TEST nvmf_tls 00:18:57.352 ************************************ 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:57.352 * Looking for test storage... 00:18:57.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 
-- # local lt=0 gt=0 eq=0 v 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:57.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.352 --rc genhtml_branch_coverage=1 00:18:57.352 --rc genhtml_function_coverage=1 00:18:57.352 --rc genhtml_legend=1 00:18:57.352 --rc geninfo_all_blocks=1 00:18:57.352 --rc geninfo_unexecuted_blocks=1 00:18:57.352 00:18:57.352 ' 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:57.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.352 --rc genhtml_branch_coverage=1 00:18:57.352 --rc genhtml_function_coverage=1 00:18:57.352 --rc genhtml_legend=1 00:18:57.352 --rc geninfo_all_blocks=1 00:18:57.352 --rc geninfo_unexecuted_blocks=1 00:18:57.352 00:18:57.352 ' 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:57.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.352 --rc genhtml_branch_coverage=1 00:18:57.352 --rc genhtml_function_coverage=1 00:18:57.352 --rc genhtml_legend=1 00:18:57.352 --rc geninfo_all_blocks=1 00:18:57.352 --rc geninfo_unexecuted_blocks=1 00:18:57.352 00:18:57.352 ' 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:57.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.352 --rc genhtml_branch_coverage=1 00:18:57.352 --rc genhtml_function_coverage=1 00:18:57.352 --rc genhtml_legend=1 00:18:57.352 --rc geninfo_all_blocks=1 00:18:57.352 --rc geninfo_unexecuted_blocks=1 00:18:57.352 00:18:57.352 ' 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.352 
12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.352 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:57.353 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:57.353 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:18:57.353 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:18:57.353 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.353 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.353 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:57.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:18:57.612 12:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:04.184 12:27:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:04.184 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:04.184 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:04.184 12:27:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:04.184 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:04.185 Found net devices under 0000:86:00.0: cvl_0_0 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:04.185 Found net devices under 0000:86:00.1: cvl_0_1 00:19:04.185 12:27:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:04.185 
12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:04.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:04.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:19:04.185 00:19:04.185 --- 10.0.0.2 ping statistics --- 00:19:04.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.185 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:04.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:04.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:19:04.185 00:19:04.185 --- 10.0.0.1 ping statistics --- 00:19:04.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.185 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1643386 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1643386 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1643386 ']' 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.185 [2024-12-10 12:27:25.544229] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:19:04.185 [2024-12-10 12:27:25.544273] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.185 [2024-12-10 12:27:25.625129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.185 [2024-12-10 12:27:25.665285] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.185 [2024-12-10 12:27:25.665323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:04.185 [2024-12-10 12:27:25.665331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.185 [2024-12-10 12:27:25.665337] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.185 [2024-12-10 12:27:25.665342] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:04.185 [2024-12-10 12:27:25.665858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:04.185 true 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:04.185 12:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:04.185 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:04.185 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:04.185 
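Editor's note: the interface plumbing at the start of this section pairs the host's two NIC ports across a network namespace so the NVMe-oF target and the initiator can talk TCP on one machine. A condensed sketch of those steps follows; the interface names (cvl_0_0/cvl_0_1), addresses, port, and iptables rule are taken from the log above, but this is an illustrative summary, not the actual nvmf/common.sh helper, and it needs root.

```shell
# Put one NIC port in its own namespace for the target; keep the peer port
# in the root namespace for the initiator, then open the NVMe/TCP port.
ip netns add cvl_0_0_ns_spdk                  # namespace hosting nvmf_tgt
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target-side port in
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP listener port

# Sanity-check reachability in both directions, as the log does.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Any SPDK app started with `ip netns exec cvl_0_0_ns_spdk ...` (as nvmf_tgt is a few lines above) then listens on 10.0.0.2 while the initiator-side tools connect from the root namespace.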
12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:04.185 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:04.185 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:04.445 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:04.445 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:04.445 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:04.704 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:04.704 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:04.963 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:04.963 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:04.963 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:04.963 12:27:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:04.963 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:04.963 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:04.963 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_set_options -i ssl 
--enable-ktls 00:19:05.221 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:05.221 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:05.480 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:05.480 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:05.480 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:05.739 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:05.739 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:05.739 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:05.739 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:05.739 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:05.739 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:05.739 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:05.739 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:05.739 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:05.739 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:05.739 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 
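Editor's note: the bare `python -` invocation above is where format_interchange_psk builds the TLS PSK interchange string used throughout this test: the configured key bytes plus a CRC-32, base64-encoded behind a `NVMeTLSkey-1:<hash>:` prefix (hash id 01 denoting SHA-256). The function below is a hypothetical reconstruction of that heredoc, not the real nvmf/common.sh code; in particular it assumes, as the logged base64 payloads suggest, that the 32-character hex string is used as-is as the key bytes and that the CRC-32 is appended little-endian.

```shell
# Reconstruction (assumed behavior) of format_interchange_psk from nvmf/common.sh.
format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PYEOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()                 # ASCII key string, used verbatim
crc = struct.pack("<I", zlib.crc32(key))   # 4-byte little-endian CRC-32 of the key
b64 = base64.b64encode(key + crc).decode()
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02d}:{b64}:")
PYEOF
}

format_interchange_psk 00112233445566778899aabbccddeeff 1
```

If the reconstruction is right, it reproduces the two keys this test writes to its mktemp'd 0600 files a few lines below.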
00:19:05.998 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:05.998 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:05.998 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:05.998 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:05.998 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:05.998 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:05.998 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:05.998 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:05.998 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:05.998 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:05.998 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.ZoeGTnqxPY 00:19:05.998 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:05.998 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.naYql9N8NJ 00:19:05.998 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:05.998 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:05.998 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ZoeGTnqxPY 00:19:05.998 12:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.naYql9N8NJ 00:19:05.999 12:27:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:06.258 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py framework_start_init 00:19:06.517 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.ZoeGTnqxPY 00:19:06.517 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZoeGTnqxPY 00:19:06.517 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:06.517 [2024-12-10 12:27:28.605045] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.517 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:06.776 12:27:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:07.035 [2024-12-10 12:27:29.002053] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:07.035 [2024-12-10 12:27:29.002294] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.035 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:07.035 malloc0 00:19:07.294 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:07.294 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZoeGTnqxPY 00:19:07.561 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:07.823 12:27:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ZoeGTnqxPY 00:19:17.798 Initializing NVMe Controllers 00:19:17.798 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:17.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:17.798 Initialization complete. Launching workers. 
00:19:17.798 ======================================================== 00:19:17.798 Latency(us) 00:19:17.798 Device Information : IOPS MiB/s Average min max 00:19:17.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16396.09 64.05 3903.46 833.81 5171.03 00:19:17.798 ======================================================== 00:19:17.798 Total : 16396.09 64.05 3903.46 833.81 5171.03 00:19:17.798 00:19:17.798 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZoeGTnqxPY 00:19:17.798 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:17.798 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:17.798 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:17.798 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZoeGTnqxPY 00:19:17.798 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:17.798 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1645734 00:19:17.798 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:17.798 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:17.798 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1645734 /var/tmp/bdevperf.sock 00:19:17.798 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1645734 ']' 00:19:17.798 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
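Editor's note: stripped of the xtrace noise, the target-side TLS setup that produced the perf run above is a short RPC sequence. The commands, paths, NQNs, and flags below are exactly as logged (only condensed for readability); `-k` on nvmf_subsystem_add_listener marks the listener as requiring TLS, and `--psk key0` ties host1 to the keyring entry holding the interchange-format key.

```shell
# Condensed target-side sequence from the log; assumes nvmf_tgt is already
# running with --wait-for-rpc inside the cvl_0_0_ns_spdk namespace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
key=/tmp/tmp.ZoeGTnqxPY   # PSK file (NVMeTLSkey-1 format, chmod 0600)

$rpc sock_impl_set_options -i ssl --tls-version 13   # must happen pre-init
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k                    # -k: TLS-secured listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$key"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key0
```

The bdevperf side mirrors this: it registers the same key on its own RPC socket (`-s /var/tmp/bdevperf.sock keyring_file_add_key key0 ...`) and passes `--psk key0` to bdev_nvme_attach_controller, which is why the later attempt with the mismatched /tmp/tmp.naYql9N8NJ key fails.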
00:19:17.798 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.798 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:17.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:17.798 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.798 12:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.798 [2024-12-10 12:27:39.945919] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:19:17.798 [2024-12-10 12:27:39.945969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1645734 ] 00:19:18.057 [2024-12-10 12:27:40.021475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.057 [2024-12-10 12:27:40.065620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.057 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.057 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:18.057 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZoeGTnqxPY 00:19:18.316 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2016-06.io.spdk:host1 --psk key0 00:19:18.575 [2024-12-10 12:27:40.529566] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:18.575 TLSTESTn1 00:19:18.575 12:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:18.575 Running I/O for 10 seconds... 00:19:20.887 5394.00 IOPS, 21.07 MiB/s [2024-12-10T11:27:44.014Z] 5488.50 IOPS, 21.44 MiB/s [2024-12-10T11:27:44.950Z] 5467.00 IOPS, 21.36 MiB/s [2024-12-10T11:27:45.885Z] 5471.25 IOPS, 21.37 MiB/s [2024-12-10T11:27:46.821Z] 5478.00 IOPS, 21.40 MiB/s [2024-12-10T11:27:47.756Z] 5462.33 IOPS, 21.34 MiB/s [2024-12-10T11:27:48.788Z] 5469.43 IOPS, 21.36 MiB/s [2024-12-10T11:27:50.163Z] 5471.38 IOPS, 21.37 MiB/s [2024-12-10T11:27:51.098Z] 5482.89 IOPS, 21.42 MiB/s [2024-12-10T11:27:51.098Z] 5481.00 IOPS, 21.41 MiB/s 00:19:28.930 Latency(us) 00:19:28.930 [2024-12-10T11:27:51.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.930 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:28.930 Verification LBA range: start 0x0 length 0x2000 00:19:28.930 TLSTESTn1 : 10.01 5486.80 21.43 0.00 0.00 23294.54 4815.47 26328.38 00:19:28.930 [2024-12-10T11:27:51.098Z] =================================================================================================================== 00:19:28.930 [2024-12-10T11:27:51.098Z] Total : 5486.80 21.43 0.00 0.00 23294.54 4815.47 26328.38 00:19:28.930 { 00:19:28.930 "results": [ 00:19:28.930 { 00:19:28.930 "job": "TLSTESTn1", 00:19:28.930 "core_mask": "0x4", 00:19:28.930 "workload": "verify", 00:19:28.930 "status": "finished", 00:19:28.930 "verify_range": { 00:19:28.930 "start": 0, 00:19:28.930 "length": 8192 00:19:28.930 }, 00:19:28.930 "queue_depth": 128, 00:19:28.930 "io_size": 4096, 00:19:28.930 "runtime": 10.012572, 
00:19:28.930 "iops": 5486.801992534985, 00:19:28.930 "mibps": 21.432820283339787, 00:19:28.930 "io_failed": 0, 00:19:28.930 "io_timeout": 0, 00:19:28.930 "avg_latency_us": 23294.540500288473, 00:19:28.930 "min_latency_us": 4815.471304347826, 00:19:28.930 "max_latency_us": 26328.375652173912 00:19:28.930 } 00:19:28.930 ], 00:19:28.930 "core_count": 1 00:19:28.930 } 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1645734 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1645734 ']' 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1645734 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1645734 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1645734' 00:19:28.930 killing process with pid 1645734 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1645734 00:19:28.930 Received shutdown signal, test time was about 10.000000 seconds 00:19:28.930 00:19:28.930 Latency(us) 00:19:28.930 [2024-12-10T11:27:51.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.930 [2024-12-10T11:27:51.098Z] 
=================================================================================================================== 00:19:28.930 [2024-12-10T11:27:51.098Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1645734 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.naYql9N8NJ 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.naYql9N8NJ 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.naYql9N8NJ 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.naYql9N8NJ 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1647574 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1647574 /var/tmp/bdevperf.sock 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1647574 ']' 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:28.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.930 12:27:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.930 [2024-12-10 12:27:51.041666] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:19:28.930 [2024-12-10 12:27:51.041714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647574 ] 00:19:29.189 [2024-12-10 12:27:51.112760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.189 [2024-12-10 12:27:51.153498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.189 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.189 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:29.189 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.naYql9N8NJ 00:19:29.447 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:29.706 [2024-12-10 12:27:51.625914] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:29.706 [2024-12-10 12:27:51.631047] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:29.706 [2024-12-10 12:27:51.631299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1009e20 (107): Transport endpoint is not connected 00:19:29.706 [2024-12-10 12:27:51.632292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1009e20 (9): Bad file descriptor 00:19:29.706 
[2024-12-10 12:27:51.633294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:29.706 [2024-12-10 12:27:51.633303] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:29.706 [2024-12-10 12:27:51.633312] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:29.706 [2024-12-10 12:27:51.633320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:29.706 request: 00:19:29.706 { 00:19:29.706 "name": "TLSTEST", 00:19:29.706 "trtype": "tcp", 00:19:29.706 "traddr": "10.0.0.2", 00:19:29.706 "adrfam": "ipv4", 00:19:29.706 "trsvcid": "4420", 00:19:29.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.706 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:29.706 "prchk_reftag": false, 00:19:29.706 "prchk_guard": false, 00:19:29.706 "hdgst": false, 00:19:29.706 "ddgst": false, 00:19:29.706 "psk": "key0", 00:19:29.706 "allow_unrecognized_csi": false, 00:19:29.706 "method": "bdev_nvme_attach_controller", 00:19:29.706 "req_id": 1 00:19:29.706 } 00:19:29.706 Got JSON-RPC error response 00:19:29.706 response: 00:19:29.706 { 00:19:29.706 "code": -5, 00:19:29.706 "message": "Input/output error" 00:19:29.706 } 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1647574 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1647574 ']' 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1647574 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1647574 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1647574' 00:19:29.706 killing process with pid 1647574 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1647574 00:19:29.706 Received shutdown signal, test time was about 10.000000 seconds 00:19:29.706 00:19:29.706 Latency(us) 00:19:29.706 [2024-12-10T11:27:51.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.706 [2024-12-10T11:27:51.874Z] =================================================================================================================== 00:19:29.706 [2024-12-10T11:27:51.874Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1647574 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZoeGTnqxPY 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZoeGTnqxPY 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZoeGTnqxPY 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZoeGTnqxPY 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1647806 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1647806 
/var/tmp/bdevperf.sock 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1647806 ']' 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.706 12:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.965 [2024-12-10 12:27:51.901632] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:19:29.966 [2024-12-10 12:27:51.901680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647806 ] 00:19:29.966 [2024-12-10 12:27:51.976615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.966 [2024-12-10 12:27:52.015769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.966 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.966 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:29.966 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZoeGTnqxPY 00:19:30.224 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:30.483 [2024-12-10 12:27:52.491806] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:30.483 [2024-12-10 12:27:52.503138] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:30.483 [2024-12-10 12:27:52.503170] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:30.483 [2024-12-10 12:27:52.503193] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint 
is not connected 00:19:30.483 [2024-12-10 12:27:52.504164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x670e20 (107): Transport endpoint is not connected 00:19:30.483 [2024-12-10 12:27:52.505154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x670e20 (9): Bad file descriptor 00:19:30.483 [2024-12-10 12:27:52.506167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:30.483 [2024-12-10 12:27:52.506176] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:30.483 [2024-12-10 12:27:52.506183] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:30.483 [2024-12-10 12:27:52.506191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:30.483 request: 00:19:30.483 { 00:19:30.483 "name": "TLSTEST", 00:19:30.483 "trtype": "tcp", 00:19:30.483 "traddr": "10.0.0.2", 00:19:30.483 "adrfam": "ipv4", 00:19:30.483 "trsvcid": "4420", 00:19:30.483 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.483 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:30.483 "prchk_reftag": false, 00:19:30.483 "prchk_guard": false, 00:19:30.483 "hdgst": false, 00:19:30.483 "ddgst": false, 00:19:30.483 "psk": "key0", 00:19:30.483 "allow_unrecognized_csi": false, 00:19:30.483 "method": "bdev_nvme_attach_controller", 00:19:30.484 "req_id": 1 00:19:30.484 } 00:19:30.484 Got JSON-RPC error response 00:19:30.484 response: 00:19:30.484 { 00:19:30.484 "code": -5, 00:19:30.484 "message": "Input/output error" 00:19:30.484 } 00:19:30.484 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1647806 00:19:30.484 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1647806 ']' 00:19:30.484 12:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1647806 00:19:30.484 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:30.484 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.484 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1647806 00:19:30.484 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:30.484 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:30.484 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1647806' 00:19:30.484 killing process with pid 1647806 00:19:30.484 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1647806 00:19:30.484 Received shutdown signal, test time was about 10.000000 seconds 00:19:30.484 00:19:30.484 Latency(us) 00:19:30.484 [2024-12-10T11:27:52.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.484 [2024-12-10T11:27:52.652Z] =================================================================================================================== 00:19:30.484 [2024-12-10T11:27:52.652Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:30.484 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1647806 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:30.743 12:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZoeGTnqxPY 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZoeGTnqxPY 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZoeGTnqxPY 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZoeGTnqxPY 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1647848 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1647848 /var/tmp/bdevperf.sock 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1647848 ']' 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.743 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.743 [2024-12-10 12:27:52.779210] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:19:30.743 [2024-12-10 12:27:52.779258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1647848 ] 00:19:30.743 [2024-12-10 12:27:52.855449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.743 [2024-12-10 12:27:52.896748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.002 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.002 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:31.002 12:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZoeGTnqxPY 00:19:31.260 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:31.260 [2024-12-10 12:27:53.344236] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:31.260 [2024-12-10 12:27:53.355347] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:31.260 [2024-12-10 12:27:53.355369] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:31.260 [2024-12-10 12:27:53.355391] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint 
is not connected 00:19:31.260 [2024-12-10 12:27:53.355528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d46e20 (107): Transport endpoint is not connected 00:19:31.260 [2024-12-10 12:27:53.356521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d46e20 (9): Bad file descriptor 00:19:31.260 [2024-12-10 12:27:53.357522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:31.260 [2024-12-10 12:27:53.357532] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:31.260 [2024-12-10 12:27:53.357539] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:31.260 [2024-12-10 12:27:53.357547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:19:31.260 request: 00:19:31.260 { 00:19:31.260 "name": "TLSTEST", 00:19:31.260 "trtype": "tcp", 00:19:31.260 "traddr": "10.0.0.2", 00:19:31.260 "adrfam": "ipv4", 00:19:31.260 "trsvcid": "4420", 00:19:31.260 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:31.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.260 "prchk_reftag": false, 00:19:31.260 "prchk_guard": false, 00:19:31.260 "hdgst": false, 00:19:31.260 "ddgst": false, 00:19:31.260 "psk": "key0", 00:19:31.260 "allow_unrecognized_csi": false, 00:19:31.260 "method": "bdev_nvme_attach_controller", 00:19:31.260 "req_id": 1 00:19:31.260 } 00:19:31.260 Got JSON-RPC error response 00:19:31.260 response: 00:19:31.260 { 00:19:31.260 "code": -5, 00:19:31.260 "message": "Input/output error" 00:19:31.260 } 00:19:31.260 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1647848 00:19:31.260 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1647848 ']' 00:19:31.260 12:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1647848 00:19:31.260 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:31.260 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.260 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1647848 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1647848' 00:19:31.519 killing process with pid 1647848 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1647848 00:19:31.519 Received shutdown signal, test time was about 10.000000 seconds 00:19:31.519 00:19:31.519 Latency(us) 00:19:31.519 [2024-12-10T11:27:53.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.519 [2024-12-10T11:27:53.687Z] =================================================================================================================== 00:19:31.519 [2024-12-10T11:27:53.687Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1647848 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:31.519 12:27:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1648062 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf 
-m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1648062 /var/tmp/bdevperf.sock 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1648062 ']' 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:31.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.519 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.519 [2024-12-10 12:27:53.637231] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:19:31.519 [2024-12-10 12:27:53.637280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648062 ] 00:19:31.778 [2024-12-10 12:27:53.703078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.778 [2024-12-10 12:27:53.740150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.778 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.778 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:31.778 12:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:32.036 [2024-12-10 12:27:54.007249] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:32.036 [2024-12-10 12:27:54.007282] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:32.036 request: 00:19:32.036 { 00:19:32.036 "name": "key0", 00:19:32.036 "path": "", 00:19:32.036 "method": "keyring_file_add_key", 00:19:32.036 "req_id": 1 00:19:32.036 } 00:19:32.036 Got JSON-RPC error response 00:19:32.036 response: 00:19:32.036 { 00:19:32.036 "code": -1, 00:19:32.036 "message": "Operation not permitted" 00:19:32.036 } 00:19:32.036 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:32.295 [2024-12-10 12:27:54.207862] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:19:32.295 [2024-12-10 12:27:54.207893] bdev_nvme.c:6755:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:32.295 request: 00:19:32.295 { 00:19:32.295 "name": "TLSTEST", 00:19:32.295 "trtype": "tcp", 00:19:32.295 "traddr": "10.0.0.2", 00:19:32.295 "adrfam": "ipv4", 00:19:32.295 "trsvcid": "4420", 00:19:32.295 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.295 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:32.295 "prchk_reftag": false, 00:19:32.295 "prchk_guard": false, 00:19:32.295 "hdgst": false, 00:19:32.295 "ddgst": false, 00:19:32.295 "psk": "key0", 00:19:32.295 "allow_unrecognized_csi": false, 00:19:32.295 "method": "bdev_nvme_attach_controller", 00:19:32.295 "req_id": 1 00:19:32.295 } 00:19:32.295 Got JSON-RPC error response 00:19:32.295 response: 00:19:32.295 { 00:19:32.295 "code": -126, 00:19:32.295 "message": "Required key not available" 00:19:32.295 } 00:19:32.295 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1648062 00:19:32.295 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1648062 ']' 00:19:32.295 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1648062 00:19:32.295 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:32.295 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.295 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1648062 00:19:32.295 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:32.295 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:32.295 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1648062' 00:19:32.295 killing process with pid 1648062 
00:19:32.295 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1648062 00:19:32.295 Received shutdown signal, test time was about 10.000000 seconds 00:19:32.295 00:19:32.295 Latency(us) 00:19:32.295 [2024-12-10T11:27:54.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.296 [2024-12-10T11:27:54.464Z] =================================================================================================================== 00:19:32.296 [2024-12-10T11:27:54.464Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:32.296 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1648062 00:19:32.296 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:32.296 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:32.296 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:32.296 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:32.296 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:32.296 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1643386 00:19:32.296 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1643386 ']' 00:19:32.296 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1643386 00:19:32.296 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:32.296 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.296 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1643386 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1643386' 00:19:32.555 killing process with pid 1643386 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1643386 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1643386 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.VY630U2buX 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:32.555 12:27:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.VY630U2buX 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1648307 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1648307 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1648307 ']' 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.555 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.814 [2024-12-10 12:27:54.754879] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:19:32.814 [2024-12-10 12:27:54.754925] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.814 [2024-12-10 12:27:54.832526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.814 [2024-12-10 12:27:54.872310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.814 [2024-12-10 12:27:54.872346] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.814 [2024-12-10 12:27:54.872353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.814 [2024-12-10 12:27:54.872359] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.814 [2024-12-10 12:27:54.872365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:32.814 [2024-12-10 12:27:54.872930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.814 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.814 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:32.814 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:32.814 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:32.814 12:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.078 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.079 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.VY630U2buX 00:19:33.079 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VY630U2buX 00:19:33.079 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:33.079 [2024-12-10 12:27:55.181902] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.079 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:33.342 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:33.600 [2024-12-10 12:27:55.578924] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:33.600 [2024-12-10 12:27:55.579140] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:33.600 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:33.858 malloc0 00:19:33.858 12:27:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:33.858 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VY630U2buX 00:19:34.116 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:34.375 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VY630U2buX 00:19:34.375 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:34.375 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:34.375 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:34.375 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.VY630U2buX 00:19:34.375 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:34.375 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:34.375 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1648566 00:19:34.375 
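The trace above is the standard `setup_nvmf_tgt` sequence from `target/tls.sh` driven through `rpc.py`. Condensed into a standalone sketch (paths shortened for readability; `$KEY` is a placeholder for the generated PSK interchange file, and a running `nvmf_tgt` listening on the default RPC socket is assumed):

```shell
# Condensed from the rpc.py calls traced above (target/tls.sh lines 50-59).
KEY=/tmp/psk.key     # placeholder: the temp key file created earlier in the test
RPC=./scripts/rpc.py # placeholder: path to the SPDK rpc.py script

$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 "$KEY"
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
```

Note the `-k` on `nvmf_subsystem_add_listener`, which enables the (experimental) TLS support that the `nvmf_tcp_listen` notices above report.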
12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:34.375 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1648566 /var/tmp/bdevperf.sock 00:19:34.375 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1648566 ']' 00:19:34.375 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:34.375 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.375 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:34.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:34.375 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.375 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.375 [2024-12-10 12:27:56.439262] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:19:34.375 [2024-12-10 12:27:56.439311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648566 ] 00:19:34.375 [2024-12-10 12:27:56.514890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.632 [2024-12-10 12:27:56.555597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.632 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.632 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:34.632 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VY630U2buX 00:19:34.890 12:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:34.891 [2024-12-10 12:27:57.011470] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:35.149 TLSTESTn1 00:19:35.149 12:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:35.149 Running I/O for 10 seconds... 
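The per-run summary that follows reports both IOPS and MiB/s; the two figures are tied together by the 4096-byte IO size passed to bdevperf (`-o 4096`). A quick cross-check of the arithmetic:

```shell
# Cross-check the bdevperf result: "mibps" should equal iops * io_size / 2^20,
# with io_size = 4096 bytes taken from the -o 4096 option used above.
iops=5424.939620516909   # "iops" value from the results JSON below
awk -v iops="$iops" 'BEGIN { printf "%.4f MiB/s\n", iops * 4096 / (1024 * 1024) }'
# prints: 21.1912 MiB/s
```

This matches the `"mibps": 21.191170392644175` field in the JSON results.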
00:19:37.460 5394.00 IOPS, 21.07 MiB/s [2024-12-10T11:28:00.564Z] 5411.00 IOPS, 21.14 MiB/s [2024-12-10T11:28:01.499Z] 5442.33 IOPS, 21.26 MiB/s [2024-12-10T11:28:02.434Z] 5413.00 IOPS, 21.14 MiB/s [2024-12-10T11:28:03.368Z] 5397.80 IOPS, 21.09 MiB/s [2024-12-10T11:28:04.303Z] 5412.00 IOPS, 21.14 MiB/s [2024-12-10T11:28:05.237Z] 5414.00 IOPS, 21.15 MiB/s [2024-12-10T11:28:06.613Z] 5413.62 IOPS, 21.15 MiB/s [2024-12-10T11:28:07.549Z] 5410.22 IOPS, 21.13 MiB/s [2024-12-10T11:28:07.549Z] 5419.00 IOPS, 21.17 MiB/s 00:19:45.381 Latency(us) 00:19:45.381 [2024-12-10T11:28:07.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.381 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:45.381 Verification LBA range: start 0x0 length 0x2000 00:19:45.381 TLSTESTn1 : 10.01 5424.94 21.19 0.00 0.00 23560.64 4673.00 23478.98 00:19:45.381 [2024-12-10T11:28:07.549Z] =================================================================================================================== 00:19:45.381 [2024-12-10T11:28:07.549Z] Total : 5424.94 21.19 0.00 0.00 23560.64 4673.00 23478.98 00:19:45.381 { 00:19:45.381 "results": [ 00:19:45.381 { 00:19:45.381 "job": "TLSTESTn1", 00:19:45.381 "core_mask": "0x4", 00:19:45.381 "workload": "verify", 00:19:45.381 "status": "finished", 00:19:45.381 "verify_range": { 00:19:45.381 "start": 0, 00:19:45.381 "length": 8192 00:19:45.381 }, 00:19:45.381 "queue_depth": 128, 00:19:45.381 "io_size": 4096, 00:19:45.381 "runtime": 10.012093, 00:19:45.381 "iops": 5424.939620516909, 00:19:45.381 "mibps": 21.191170392644175, 00:19:45.381 "io_failed": 0, 00:19:45.381 "io_timeout": 0, 00:19:45.381 "avg_latency_us": 23560.641979819815, 00:19:45.381 "min_latency_us": 4673.0017391304345, 00:19:45.381 "max_latency_us": 23478.98434782609 00:19:45.381 } 00:19:45.381 ], 00:19:45.381 "core_count": 1 00:19:45.381 } 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1648566 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1648566 ']' 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1648566 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1648566 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1648566' 00:19:45.381 killing process with pid 1648566 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1648566 00:19:45.381 Received shutdown signal, test time was about 10.000000 seconds 00:19:45.381 00:19:45.381 Latency(us) 00:19:45.381 [2024-12-10T11:28:07.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.381 [2024-12-10T11:28:07.549Z] =================================================================================================================== 00:19:45.381 [2024-12-10T11:28:07.549Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1648566 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.VY630U2buX 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VY630U2buX 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VY630U2buX 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VY630U2buX 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.VY630U2buX 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1650409 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1650409 /var/tmp/bdevperf.sock 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1650409 ']' 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:45.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:45.381 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.381 [2024-12-10 12:28:07.513184] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:19:45.381 [2024-12-10 12:28:07.513232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1650409 ] 00:19:45.640 [2024-12-10 12:28:07.585043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.640 [2024-12-10 12:28:07.626556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.640 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.640 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:45.640 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VY630U2buX 00:19:45.899 [2024-12-10 12:28:07.878999] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.VY630U2buX': 0100666 00:19:45.899 [2024-12-10 12:28:07.879033] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:45.899 request: 00:19:45.899 { 00:19:45.899 "name": "key0", 00:19:45.899 "path": "/tmp/tmp.VY630U2buX", 00:19:45.899 "method": "keyring_file_add_key", 00:19:45.899 "req_id": 1 00:19:45.899 } 00:19:45.899 Got JSON-RPC error response 00:19:45.899 response: 00:19:45.899 { 00:19:45.899 "code": -1, 00:19:45.899 "message": "Operation not permitted" 00:19:45.899 } 00:19:45.899 12:28:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:46.158 [2024-12-10 12:28:08.071576] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:46.158 [2024-12-10 12:28:08.071605] bdev_nvme.c:6755:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:46.158 request: 00:19:46.158 { 00:19:46.158 "name": "TLSTEST", 00:19:46.158 "trtype": "tcp", 00:19:46.158 "traddr": "10.0.0.2", 00:19:46.158 "adrfam": "ipv4", 00:19:46.158 "trsvcid": "4420", 00:19:46.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:46.158 "prchk_reftag": false, 00:19:46.158 "prchk_guard": false, 00:19:46.158 "hdgst": false, 00:19:46.158 "ddgst": false, 00:19:46.158 "psk": "key0", 00:19:46.158 "allow_unrecognized_csi": false, 00:19:46.158 "method": "bdev_nvme_attach_controller", 00:19:46.158 "req_id": 1 00:19:46.158 } 00:19:46.158 Got JSON-RPC error response 00:19:46.158 response: 00:19:46.158 { 00:19:46.158 "code": -126, 00:19:46.158 "message": "Required key not available" 00:19:46.158 } 00:19:46.158 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1650409 00:19:46.158 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1650409 ']' 00:19:46.158 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1650409 00:19:46.158 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:46.158 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.158 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1650409 00:19:46.158 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:46.158 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:46.158 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1650409' 00:19:46.158 killing process with pid 1650409 00:19:46.158 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1650409 00:19:46.158 Received shutdown signal, test time was about 10.000000 seconds 00:19:46.158 00:19:46.158 Latency(us) 00:19:46.158 [2024-12-10T11:28:08.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.158 [2024-12-10T11:28:08.326Z] =================================================================================================================== 00:19:46.158 [2024-12-10T11:28:08.326Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:46.158 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1650409 00:19:46.158 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:46.158 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:46.158 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:46.158 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:46.158 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:46.158 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1648307 00:19:46.158 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1648307 ']' 00:19:46.158 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1648307 00:19:46.158 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:46.158 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.158 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1648307 00:19:46.417 
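The failure just exercised is a permissions check: after `chmod 0666`, `keyring_file_add_key` rejects the key file (`Invalid permissions for key file ... 0100666`, JSON-RPC code -1 "Operation not permitted"), and the subsequent `bdev_nvme_attach_controller` fails because the PSK cannot be loaded. A minimal sketch of the equivalent check (assumption: SPDK's exact mode mask may differ; this only demonstrates the 0600-vs-0666 distinction the log shows):

```shell
# Mimic the key-file permission policy implied by the keyring errors above:
# a mode granting group/other access (0666) is rejected, 0600 is accepted.
key=$(mktemp)
chmod 0666 "$key"
mode=$(stat -c %a "$key")          # GNU stat: print permissions in octal
if [ "$mode" != "600" ]; then
    echo "rejected: mode $mode grants group/other access"
fi
chmod 0600 "$key"
mode=$(stat -c %a "$key")
if [ "$mode" = "600" ]; then
    echo "accepted: mode $mode is owner-only"
fi
rm -f "$key"
```

This is why the test flow does `chmod 0666` before the negative case and `chmod 0600` (at `target/tls.sh@182`, later in the log) before retrying the setup successfully.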
12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:46.417 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:46.417 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1648307' 00:19:46.417 killing process with pid 1648307 00:19:46.417 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1648307 00:19:46.417 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1648307 00:19:46.417 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:46.417 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:46.417 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:46.417 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.417 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1650639 00:19:46.417 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:46.417 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1650639 00:19:46.417 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1650639 ']' 00:19:46.417 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.417 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.417 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:46.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.417 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.417 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.677 [2024-12-10 12:28:08.588286] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:19:46.677 [2024-12-10 12:28:08.588337] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.677 [2024-12-10 12:28:08.664304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.677 [2024-12-10 12:28:08.702637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.677 [2024-12-10 12:28:08.702671] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.677 [2024-12-10 12:28:08.702677] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.677 [2024-12-10 12:28:08.702683] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.677 [2024-12-10 12:28:08.702688] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:46.677 [2024-12-10 12:28:08.703204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.677 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.677 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:46.677 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:46.677 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:46.677 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.677 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.677 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.VY630U2buX 00:19:46.677 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:46.677 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.VY630U2buX 00:19:46.677 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:46.677 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:46.677 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:46.677 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:46.677 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.VY630U2buX 00:19:46.677 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VY630U2buX 00:19:46.677 12:28:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:46.935 [2024-12-10 12:28:09.010509] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.935 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:47.193 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:47.451 [2024-12-10 12:28:09.403510] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:47.451 [2024-12-10 12:28:09.403722] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.451 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:47.451 malloc0 00:19:47.710 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:47.710 12:28:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VY630U2buX 00:19:47.969 [2024-12-10 12:28:10.013143] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.VY630U2buX': 0100666 00:19:47.969 [2024-12-10 12:28:10.013173] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:47.969 request: 00:19:47.969 { 00:19:47.969 "name": "key0", 00:19:47.969 "path": "/tmp/tmp.VY630U2buX", 00:19:47.969 "method": "keyring_file_add_key", 00:19:47.969 
"req_id": 1 00:19:47.969 } 00:19:47.969 Got JSON-RPC error response 00:19:47.969 response: 00:19:47.969 { 00:19:47.969 "code": -1, 00:19:47.969 "message": "Operation not permitted" 00:19:47.969 } 00:19:47.969 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:48.228 [2024-12-10 12:28:10.213691] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:48.228 [2024-12-10 12:28:10.213741] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:48.228 request: 00:19:48.228 { 00:19:48.228 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.228 "host": "nqn.2016-06.io.spdk:host1", 00:19:48.228 "psk": "key0", 00:19:48.228 "method": "nvmf_subsystem_add_host", 00:19:48.228 "req_id": 1 00:19:48.228 } 00:19:48.228 Got JSON-RPC error response 00:19:48.228 response: 00:19:48.228 { 00:19:48.228 "code": -32603, 00:19:48.228 "message": "Internal error" 00:19:48.228 } 00:19:48.228 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:48.228 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:48.228 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:48.228 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:48.228 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1650639 00:19:48.228 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1650639 ']' 00:19:48.228 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1650639 00:19:48.228 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:48.228 12:28:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.228 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1650639 00:19:48.228 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:48.228 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:48.228 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1650639' 00:19:48.228 killing process with pid 1650639 00:19:48.228 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1650639 00:19:48.228 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1650639 00:19:48.487 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.VY630U2buX 00:19:48.487 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:48.487 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:48.487 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:48.487 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.487 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1650917 00:19:48.487 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:48.487 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1650917 00:19:48.487 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1650917 ']' 00:19:48.487 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.487 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.487 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.487 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.487 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.487 [2024-12-10 12:28:10.530181] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:19:48.487 [2024-12-10 12:28:10.530229] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.488 [2024-12-10 12:28:10.606374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.488 [2024-12-10 12:28:10.647600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.488 [2024-12-10 12:28:10.647633] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.488 [2024-12-10 12:28:10.647640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.488 [2024-12-10 12:28:10.647647] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.488 [2024-12-10 12:28:10.647652] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:48.488 [2024-12-10 12:28:10.648190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.746 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.746 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:48.746 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:48.746 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:48.746 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.746 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.746 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.VY630U2buX 00:19:48.746 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VY630U2buX 00:19:48.746 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:49.005 [2024-12-10 12:28:10.952704] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.005 12:28:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:49.263 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:49.263 [2024-12-10 12:28:11.365789] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:49.263 [2024-12-10 12:28:11.365989] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.263 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:49.522 malloc0 00:19:49.522 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:49.781 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VY630U2buX 00:19:50.039 12:28:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:50.039 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:50.039 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1651180 00:19:50.039 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:50.039 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1651180 /var/tmp/bdevperf.sock 00:19:50.039 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1651180 ']' 00:19:50.039 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.039 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.039 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.039 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.039 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.298 [2024-12-10 12:28:12.222653] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:19:50.298 [2024-12-10 12:28:12.222703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1651180 ] 00:19:50.298 [2024-12-10 12:28:12.298558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.298 [2024-12-10 12:28:12.338209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.298 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.298 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:50.298 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VY630U2buX 00:19:50.556 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:50.815 [2024-12-10 12:28:12.821756] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:50.815 TLSTESTn1 00:19:50.815 12:28:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py save_config 00:19:51.074 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:51.074 "subsystems": [ 00:19:51.074 { 00:19:51.074 "subsystem": "keyring", 00:19:51.074 "config": [ 00:19:51.074 { 00:19:51.074 "method": "keyring_file_add_key", 00:19:51.074 "params": { 00:19:51.074 "name": "key0", 00:19:51.074 "path": "/tmp/tmp.VY630U2buX" 00:19:51.074 } 00:19:51.074 } 00:19:51.074 ] 00:19:51.074 }, 00:19:51.074 { 00:19:51.074 "subsystem": "iobuf", 00:19:51.074 "config": [ 00:19:51.074 { 00:19:51.074 "method": "iobuf_set_options", 00:19:51.074 "params": { 00:19:51.074 "small_pool_count": 8192, 00:19:51.074 "large_pool_count": 1024, 00:19:51.074 "small_bufsize": 8192, 00:19:51.074 "large_bufsize": 135168, 00:19:51.074 "enable_numa": false 00:19:51.074 } 00:19:51.074 } 00:19:51.074 ] 00:19:51.074 }, 00:19:51.074 { 00:19:51.074 "subsystem": "sock", 00:19:51.074 "config": [ 00:19:51.074 { 00:19:51.074 "method": "sock_set_default_impl", 00:19:51.074 "params": { 00:19:51.074 "impl_name": "posix" 00:19:51.074 } 00:19:51.074 }, 00:19:51.074 { 00:19:51.074 "method": "sock_impl_set_options", 00:19:51.074 "params": { 00:19:51.074 "impl_name": "ssl", 00:19:51.074 "recv_buf_size": 4096, 00:19:51.074 "send_buf_size": 4096, 00:19:51.074 "enable_recv_pipe": true, 00:19:51.074 "enable_quickack": false, 00:19:51.074 "enable_placement_id": 0, 00:19:51.074 "enable_zerocopy_send_server": true, 00:19:51.074 "enable_zerocopy_send_client": false, 00:19:51.074 "zerocopy_threshold": 0, 00:19:51.074 "tls_version": 0, 00:19:51.074 "enable_ktls": false 00:19:51.074 } 00:19:51.074 }, 00:19:51.074 { 00:19:51.074 "method": "sock_impl_set_options", 00:19:51.074 "params": { 00:19:51.074 "impl_name": "posix", 00:19:51.074 "recv_buf_size": 2097152, 00:19:51.074 "send_buf_size": 2097152, 00:19:51.074 "enable_recv_pipe": true, 00:19:51.074 "enable_quickack": false, 00:19:51.074 
"enable_placement_id": 0, 00:19:51.074 "enable_zerocopy_send_server": true, 00:19:51.074 "enable_zerocopy_send_client": false, 00:19:51.074 "zerocopy_threshold": 0, 00:19:51.074 "tls_version": 0, 00:19:51.074 "enable_ktls": false 00:19:51.074 } 00:19:51.074 } 00:19:51.074 ] 00:19:51.074 }, 00:19:51.074 { 00:19:51.074 "subsystem": "vmd", 00:19:51.074 "config": [] 00:19:51.074 }, 00:19:51.074 { 00:19:51.074 "subsystem": "accel", 00:19:51.074 "config": [ 00:19:51.074 { 00:19:51.074 "method": "accel_set_options", 00:19:51.074 "params": { 00:19:51.074 "small_cache_size": 128, 00:19:51.074 "large_cache_size": 16, 00:19:51.074 "task_count": 2048, 00:19:51.074 "sequence_count": 2048, 00:19:51.074 "buf_count": 2048 00:19:51.074 } 00:19:51.074 } 00:19:51.074 ] 00:19:51.074 }, 00:19:51.074 { 00:19:51.074 "subsystem": "bdev", 00:19:51.074 "config": [ 00:19:51.074 { 00:19:51.074 "method": "bdev_set_options", 00:19:51.074 "params": { 00:19:51.075 "bdev_io_pool_size": 65535, 00:19:51.075 "bdev_io_cache_size": 256, 00:19:51.075 "bdev_auto_examine": true, 00:19:51.075 "iobuf_small_cache_size": 128, 00:19:51.075 "iobuf_large_cache_size": 16 00:19:51.075 } 00:19:51.075 }, 00:19:51.075 { 00:19:51.075 "method": "bdev_raid_set_options", 00:19:51.075 "params": { 00:19:51.075 "process_window_size_kb": 1024, 00:19:51.075 "process_max_bandwidth_mb_sec": 0 00:19:51.075 } 00:19:51.075 }, 00:19:51.075 { 00:19:51.075 "method": "bdev_iscsi_set_options", 00:19:51.075 "params": { 00:19:51.075 "timeout_sec": 30 00:19:51.075 } 00:19:51.075 }, 00:19:51.075 { 00:19:51.075 "method": "bdev_nvme_set_options", 00:19:51.075 "params": { 00:19:51.075 "action_on_timeout": "none", 00:19:51.075 "timeout_us": 0, 00:19:51.075 "timeout_admin_us": 0, 00:19:51.075 "keep_alive_timeout_ms": 10000, 00:19:51.075 "arbitration_burst": 0, 00:19:51.075 "low_priority_weight": 0, 00:19:51.075 "medium_priority_weight": 0, 00:19:51.075 "high_priority_weight": 0, 00:19:51.075 "nvme_adminq_poll_period_us": 10000, 00:19:51.075 
"nvme_ioq_poll_period_us": 0, 00:19:51.075 "io_queue_requests": 0, 00:19:51.075 "delay_cmd_submit": true, 00:19:51.075 "transport_retry_count": 4, 00:19:51.075 "bdev_retry_count": 3, 00:19:51.075 "transport_ack_timeout": 0, 00:19:51.075 "ctrlr_loss_timeout_sec": 0, 00:19:51.075 "reconnect_delay_sec": 0, 00:19:51.075 "fast_io_fail_timeout_sec": 0, 00:19:51.075 "disable_auto_failback": false, 00:19:51.075 "generate_uuids": false, 00:19:51.075 "transport_tos": 0, 00:19:51.075 "nvme_error_stat": false, 00:19:51.075 "rdma_srq_size": 0, 00:19:51.075 "io_path_stat": false, 00:19:51.075 "allow_accel_sequence": false, 00:19:51.075 "rdma_max_cq_size": 0, 00:19:51.075 "rdma_cm_event_timeout_ms": 0, 00:19:51.075 "dhchap_digests": [ 00:19:51.075 "sha256", 00:19:51.075 "sha384", 00:19:51.075 "sha512" 00:19:51.075 ], 00:19:51.075 "dhchap_dhgroups": [ 00:19:51.075 "null", 00:19:51.075 "ffdhe2048", 00:19:51.075 "ffdhe3072", 00:19:51.075 "ffdhe4096", 00:19:51.075 "ffdhe6144", 00:19:51.075 "ffdhe8192" 00:19:51.075 ] 00:19:51.075 } 00:19:51.075 }, 00:19:51.075 { 00:19:51.075 "method": "bdev_nvme_set_hotplug", 00:19:51.075 "params": { 00:19:51.075 "period_us": 100000, 00:19:51.075 "enable": false 00:19:51.075 } 00:19:51.075 }, 00:19:51.075 { 00:19:51.075 "method": "bdev_malloc_create", 00:19:51.075 "params": { 00:19:51.075 "name": "malloc0", 00:19:51.075 "num_blocks": 8192, 00:19:51.075 "block_size": 4096, 00:19:51.075 "physical_block_size": 4096, 00:19:51.075 "uuid": "0cb9b75b-b56b-4ec6-b415-3d5b9d7c2e33", 00:19:51.075 "optimal_io_boundary": 0, 00:19:51.075 "md_size": 0, 00:19:51.075 "dif_type": 0, 00:19:51.075 "dif_is_head_of_md": false, 00:19:51.075 "dif_pi_format": 0 00:19:51.075 } 00:19:51.075 }, 00:19:51.075 { 00:19:51.075 "method": "bdev_wait_for_examine" 00:19:51.075 } 00:19:51.075 ] 00:19:51.075 }, 00:19:51.075 { 00:19:51.075 "subsystem": "nbd", 00:19:51.075 "config": [] 00:19:51.075 }, 00:19:51.075 { 00:19:51.075 "subsystem": "scheduler", 00:19:51.075 "config": [ 00:19:51.075 
{ 00:19:51.075 "method": "framework_set_scheduler", 00:19:51.075 "params": { 00:19:51.075 "name": "static" 00:19:51.075 } 00:19:51.075 } 00:19:51.075 ] 00:19:51.075 }, 00:19:51.075 { 00:19:51.075 "subsystem": "nvmf", 00:19:51.075 "config": [ 00:19:51.075 { 00:19:51.075 "method": "nvmf_set_config", 00:19:51.075 "params": { 00:19:51.075 "discovery_filter": "match_any", 00:19:51.075 "admin_cmd_passthru": { 00:19:51.075 "identify_ctrlr": false 00:19:51.075 }, 00:19:51.075 "dhchap_digests": [ 00:19:51.075 "sha256", 00:19:51.075 "sha384", 00:19:51.075 "sha512" 00:19:51.075 ], 00:19:51.075 "dhchap_dhgroups": [ 00:19:51.075 "null", 00:19:51.075 "ffdhe2048", 00:19:51.075 "ffdhe3072", 00:19:51.075 "ffdhe4096", 00:19:51.075 "ffdhe6144", 00:19:51.075 "ffdhe8192" 00:19:51.075 ] 00:19:51.075 } 00:19:51.075 }, 00:19:51.075 { 00:19:51.075 "method": "nvmf_set_max_subsystems", 00:19:51.075 "params": { 00:19:51.075 "max_subsystems": 1024 00:19:51.075 } 00:19:51.075 }, 00:19:51.075 { 00:19:51.075 "method": "nvmf_set_crdt", 00:19:51.075 "params": { 00:19:51.075 "crdt1": 0, 00:19:51.075 "crdt2": 0, 00:19:51.075 "crdt3": 0 00:19:51.075 } 00:19:51.075 }, 00:19:51.075 { 00:19:51.075 "method": "nvmf_create_transport", 00:19:51.075 "params": { 00:19:51.075 "trtype": "TCP", 00:19:51.075 "max_queue_depth": 128, 00:19:51.075 "max_io_qpairs_per_ctrlr": 127, 00:19:51.075 "in_capsule_data_size": 4096, 00:19:51.075 "max_io_size": 131072, 00:19:51.075 "io_unit_size": 131072, 00:19:51.075 "max_aq_depth": 128, 00:19:51.075 "num_shared_buffers": 511, 00:19:51.075 "buf_cache_size": 4294967295, 00:19:51.075 "dif_insert_or_strip": false, 00:19:51.075 "zcopy": false, 00:19:51.075 "c2h_success": false, 00:19:51.075 "sock_priority": 0, 00:19:51.075 "abort_timeout_sec": 1, 00:19:51.075 "ack_timeout": 0, 00:19:51.075 "data_wr_pool_size": 0 00:19:51.075 } 00:19:51.075 }, 00:19:51.075 { 00:19:51.075 "method": "nvmf_create_subsystem", 00:19:51.075 "params": { 00:19:51.075 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:19:51.075 "allow_any_host": false, 00:19:51.075 "serial_number": "SPDK00000000000001", 00:19:51.075 "model_number": "SPDK bdev Controller", 00:19:51.075 "max_namespaces": 10, 00:19:51.075 "min_cntlid": 1, 00:19:51.075 "max_cntlid": 65519, 00:19:51.075 "ana_reporting": false 00:19:51.075 } 00:19:51.075 }, 00:19:51.075 { 00:19:51.075 "method": "nvmf_subsystem_add_host", 00:19:51.075 "params": { 00:19:51.075 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.075 "host": "nqn.2016-06.io.spdk:host1", 00:19:51.075 "psk": "key0" 00:19:51.075 } 00:19:51.075 }, 00:19:51.075 { 00:19:51.075 "method": "nvmf_subsystem_add_ns", 00:19:51.075 "params": { 00:19:51.075 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.075 "namespace": { 00:19:51.075 "nsid": 1, 00:19:51.075 "bdev_name": "malloc0", 00:19:51.075 "nguid": "0CB9B75BB56B4EC6B4153D5B9D7C2E33", 00:19:51.075 "uuid": "0cb9b75b-b56b-4ec6-b415-3d5b9d7c2e33", 00:19:51.075 "no_auto_visible": false 00:19:51.075 } 00:19:51.075 } 00:19:51.075 }, 00:19:51.075 { 00:19:51.075 "method": "nvmf_subsystem_add_listener", 00:19:51.075 "params": { 00:19:51.075 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.075 "listen_address": { 00:19:51.075 "trtype": "TCP", 00:19:51.075 "adrfam": "IPv4", 00:19:51.075 "traddr": "10.0.0.2", 00:19:51.075 "trsvcid": "4420" 00:19:51.075 }, 00:19:51.075 "secure_channel": true 00:19:51.075 } 00:19:51.075 } 00:19:51.075 ] 00:19:51.075 } 00:19:51.075 ] 00:19:51.075 }' 00:19:51.075 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:51.335 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:51.335 "subsystems": [ 00:19:51.335 { 00:19:51.335 "subsystem": "keyring", 00:19:51.335 "config": [ 00:19:51.335 { 00:19:51.335 "method": "keyring_file_add_key", 00:19:51.335 "params": { 00:19:51.335 "name": "key0", 00:19:51.335 "path": "/tmp/tmp.VY630U2buX" 
00:19:51.335 } 00:19:51.335 } 00:19:51.335 ] 00:19:51.335 }, 00:19:51.335 { 00:19:51.335 "subsystem": "iobuf", 00:19:51.335 "config": [ 00:19:51.335 { 00:19:51.335 "method": "iobuf_set_options", 00:19:51.335 "params": { 00:19:51.335 "small_pool_count": 8192, 00:19:51.335 "large_pool_count": 1024, 00:19:51.335 "small_bufsize": 8192, 00:19:51.335 "large_bufsize": 135168, 00:19:51.335 "enable_numa": false 00:19:51.335 } 00:19:51.335 } 00:19:51.335 ] 00:19:51.335 }, 00:19:51.335 { 00:19:51.335 "subsystem": "sock", 00:19:51.335 "config": [ 00:19:51.335 { 00:19:51.335 "method": "sock_set_default_impl", 00:19:51.335 "params": { 00:19:51.335 "impl_name": "posix" 00:19:51.335 } 00:19:51.335 }, 00:19:51.335 { 00:19:51.335 "method": "sock_impl_set_options", 00:19:51.335 "params": { 00:19:51.335 "impl_name": "ssl", 00:19:51.335 "recv_buf_size": 4096, 00:19:51.335 "send_buf_size": 4096, 00:19:51.335 "enable_recv_pipe": true, 00:19:51.335 "enable_quickack": false, 00:19:51.335 "enable_placement_id": 0, 00:19:51.335 "enable_zerocopy_send_server": true, 00:19:51.335 "enable_zerocopy_send_client": false, 00:19:51.335 "zerocopy_threshold": 0, 00:19:51.335 "tls_version": 0, 00:19:51.335 "enable_ktls": false 00:19:51.335 } 00:19:51.335 }, 00:19:51.335 { 00:19:51.335 "method": "sock_impl_set_options", 00:19:51.335 "params": { 00:19:51.335 "impl_name": "posix", 00:19:51.335 "recv_buf_size": 2097152, 00:19:51.335 "send_buf_size": 2097152, 00:19:51.335 "enable_recv_pipe": true, 00:19:51.335 "enable_quickack": false, 00:19:51.335 "enable_placement_id": 0, 00:19:51.335 "enable_zerocopy_send_server": true, 00:19:51.335 "enable_zerocopy_send_client": false, 00:19:51.335 "zerocopy_threshold": 0, 00:19:51.335 "tls_version": 0, 00:19:51.335 "enable_ktls": false 00:19:51.335 } 00:19:51.335 } 00:19:51.335 ] 00:19:51.335 }, 00:19:51.335 { 00:19:51.335 "subsystem": "vmd", 00:19:51.335 "config": [] 00:19:51.335 }, 00:19:51.335 { 00:19:51.335 "subsystem": "accel", 00:19:51.335 "config": [ 00:19:51.335 
{ 00:19:51.335 "method": "accel_set_options", 00:19:51.335 "params": { 00:19:51.335 "small_cache_size": 128, 00:19:51.335 "large_cache_size": 16, 00:19:51.335 "task_count": 2048, 00:19:51.335 "sequence_count": 2048, 00:19:51.335 "buf_count": 2048 00:19:51.335 } 00:19:51.335 } 00:19:51.335 ] 00:19:51.335 }, 00:19:51.335 { 00:19:51.335 "subsystem": "bdev", 00:19:51.335 "config": [ 00:19:51.335 { 00:19:51.335 "method": "bdev_set_options", 00:19:51.335 "params": { 00:19:51.335 "bdev_io_pool_size": 65535, 00:19:51.335 "bdev_io_cache_size": 256, 00:19:51.335 "bdev_auto_examine": true, 00:19:51.335 "iobuf_small_cache_size": 128, 00:19:51.335 "iobuf_large_cache_size": 16 00:19:51.335 } 00:19:51.335 }, 00:19:51.335 { 00:19:51.335 "method": "bdev_raid_set_options", 00:19:51.335 "params": { 00:19:51.335 "process_window_size_kb": 1024, 00:19:51.335 "process_max_bandwidth_mb_sec": 0 00:19:51.335 } 00:19:51.335 }, 00:19:51.335 { 00:19:51.335 "method": "bdev_iscsi_set_options", 00:19:51.335 "params": { 00:19:51.335 "timeout_sec": 30 00:19:51.335 } 00:19:51.335 }, 00:19:51.335 { 00:19:51.335 "method": "bdev_nvme_set_options", 00:19:51.335 "params": { 00:19:51.335 "action_on_timeout": "none", 00:19:51.335 "timeout_us": 0, 00:19:51.335 "timeout_admin_us": 0, 00:19:51.335 "keep_alive_timeout_ms": 10000, 00:19:51.335 "arbitration_burst": 0, 00:19:51.335 "low_priority_weight": 0, 00:19:51.335 "medium_priority_weight": 0, 00:19:51.335 "high_priority_weight": 0, 00:19:51.335 "nvme_adminq_poll_period_us": 10000, 00:19:51.335 "nvme_ioq_poll_period_us": 0, 00:19:51.335 "io_queue_requests": 512, 00:19:51.335 "delay_cmd_submit": true, 00:19:51.335 "transport_retry_count": 4, 00:19:51.335 "bdev_retry_count": 3, 00:19:51.335 "transport_ack_timeout": 0, 00:19:51.335 "ctrlr_loss_timeout_sec": 0, 00:19:51.335 "reconnect_delay_sec": 0, 00:19:51.335 "fast_io_fail_timeout_sec": 0, 00:19:51.335 "disable_auto_failback": false, 00:19:51.335 "generate_uuids": false, 00:19:51.335 "transport_tos": 0, 
00:19:51.335 "nvme_error_stat": false, 00:19:51.335 "rdma_srq_size": 0, 00:19:51.335 "io_path_stat": false, 00:19:51.335 "allow_accel_sequence": false, 00:19:51.335 "rdma_max_cq_size": 0, 00:19:51.335 "rdma_cm_event_timeout_ms": 0, 00:19:51.335 "dhchap_digests": [ 00:19:51.335 "sha256", 00:19:51.335 "sha384", 00:19:51.335 "sha512" 00:19:51.335 ], 00:19:51.335 "dhchap_dhgroups": [ 00:19:51.335 "null", 00:19:51.335 "ffdhe2048", 00:19:51.335 "ffdhe3072", 00:19:51.335 "ffdhe4096", 00:19:51.335 "ffdhe6144", 00:19:51.335 "ffdhe8192" 00:19:51.335 ] 00:19:51.335 } 00:19:51.335 }, 00:19:51.335 { 00:19:51.335 "method": "bdev_nvme_attach_controller", 00:19:51.335 "params": { 00:19:51.335 "name": "TLSTEST", 00:19:51.335 "trtype": "TCP", 00:19:51.335 "adrfam": "IPv4", 00:19:51.335 "traddr": "10.0.0.2", 00:19:51.335 "trsvcid": "4420", 00:19:51.335 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.335 "prchk_reftag": false, 00:19:51.335 "prchk_guard": false, 00:19:51.335 "ctrlr_loss_timeout_sec": 0, 00:19:51.335 "reconnect_delay_sec": 0, 00:19:51.335 "fast_io_fail_timeout_sec": 0, 00:19:51.335 "psk": "key0", 00:19:51.335 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.335 "hdgst": false, 00:19:51.335 "ddgst": false, 00:19:51.335 "multipath": "multipath" 00:19:51.335 } 00:19:51.335 }, 00:19:51.335 { 00:19:51.335 "method": "bdev_nvme_set_hotplug", 00:19:51.335 "params": { 00:19:51.335 "period_us": 100000, 00:19:51.335 "enable": false 00:19:51.335 } 00:19:51.335 }, 00:19:51.335 { 00:19:51.335 "method": "bdev_wait_for_examine" 00:19:51.335 } 00:19:51.335 ] 00:19:51.335 }, 00:19:51.335 { 00:19:51.335 "subsystem": "nbd", 00:19:51.335 "config": [] 00:19:51.335 } 00:19:51.335 ] 00:19:51.335 }' 00:19:51.336 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1651180 00:19:51.336 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1651180 ']' 00:19:51.336 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 1651180 00:19:51.336 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:51.336 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.336 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1651180 00:19:51.336 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:51.336 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:51.336 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1651180' 00:19:51.336 killing process with pid 1651180 00:19:51.336 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1651180 00:19:51.336 Received shutdown signal, test time was about 10.000000 seconds 00:19:51.336 00:19:51.336 Latency(us) 00:19:51.336 [2024-12-10T11:28:13.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.336 [2024-12-10T11:28:13.504Z] =================================================================================================================== 00:19:51.336 [2024-12-10T11:28:13.504Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:51.336 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1651180 00:19:51.595 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1650917 00:19:51.595 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1650917 ']' 00:19:51.595 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1650917 00:19:51.595 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:51.595 12:28:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:51.595 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1650917 00:19:51.595 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:51.595 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:51.595 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1650917' 00:19:51.595 killing process with pid 1650917 00:19:51.595 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1650917 00:19:51.595 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1650917 00:19:51.854 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:51.854 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:51.854 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:51.854 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.854 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:51.854 "subsystems": [ 00:19:51.854 { 00:19:51.854 "subsystem": "keyring", 00:19:51.854 "config": [ 00:19:51.854 { 00:19:51.854 "method": "keyring_file_add_key", 00:19:51.854 "params": { 00:19:51.854 "name": "key0", 00:19:51.854 "path": "/tmp/tmp.VY630U2buX" 00:19:51.854 } 00:19:51.854 } 00:19:51.854 ] 00:19:51.854 }, 00:19:51.854 { 00:19:51.854 "subsystem": "iobuf", 00:19:51.854 "config": [ 00:19:51.854 { 00:19:51.854 "method": "iobuf_set_options", 00:19:51.854 "params": { 00:19:51.854 "small_pool_count": 8192, 00:19:51.854 "large_pool_count": 1024, 00:19:51.854 "small_bufsize": 8192, 
00:19:51.854 "large_bufsize": 135168, 00:19:51.854 "enable_numa": false 00:19:51.854 } 00:19:51.854 } 00:19:51.854 ] 00:19:51.854 }, 00:19:51.854 { 00:19:51.854 "subsystem": "sock", 00:19:51.854 "config": [ 00:19:51.854 { 00:19:51.854 "method": "sock_set_default_impl", 00:19:51.854 "params": { 00:19:51.854 "impl_name": "posix" 00:19:51.854 } 00:19:51.854 }, 00:19:51.854 { 00:19:51.854 "method": "sock_impl_set_options", 00:19:51.854 "params": { 00:19:51.854 "impl_name": "ssl", 00:19:51.854 "recv_buf_size": 4096, 00:19:51.854 "send_buf_size": 4096, 00:19:51.854 "enable_recv_pipe": true, 00:19:51.854 "enable_quickack": false, 00:19:51.854 "enable_placement_id": 0, 00:19:51.854 "enable_zerocopy_send_server": true, 00:19:51.854 "enable_zerocopy_send_client": false, 00:19:51.854 "zerocopy_threshold": 0, 00:19:51.854 "tls_version": 0, 00:19:51.854 "enable_ktls": false 00:19:51.854 } 00:19:51.854 }, 00:19:51.854 { 00:19:51.854 "method": "sock_impl_set_options", 00:19:51.854 "params": { 00:19:51.854 "impl_name": "posix", 00:19:51.854 "recv_buf_size": 2097152, 00:19:51.854 "send_buf_size": 2097152, 00:19:51.854 "enable_recv_pipe": true, 00:19:51.854 "enable_quickack": false, 00:19:51.854 "enable_placement_id": 0, 00:19:51.854 "enable_zerocopy_send_server": true, 00:19:51.854 "enable_zerocopy_send_client": false, 00:19:51.854 "zerocopy_threshold": 0, 00:19:51.854 "tls_version": 0, 00:19:51.854 "enable_ktls": false 00:19:51.854 } 00:19:51.854 } 00:19:51.854 ] 00:19:51.854 }, 00:19:51.854 { 00:19:51.854 "subsystem": "vmd", 00:19:51.854 "config": [] 00:19:51.854 }, 00:19:51.854 { 00:19:51.854 "subsystem": "accel", 00:19:51.854 "config": [ 00:19:51.854 { 00:19:51.854 "method": "accel_set_options", 00:19:51.854 "params": { 00:19:51.854 "small_cache_size": 128, 00:19:51.854 "large_cache_size": 16, 00:19:51.854 "task_count": 2048, 00:19:51.854 "sequence_count": 2048, 00:19:51.854 "buf_count": 2048 00:19:51.854 } 00:19:51.854 } 00:19:51.854 ] 00:19:51.854 }, 00:19:51.854 { 
00:19:51.854 "subsystem": "bdev", 00:19:51.854 "config": [ 00:19:51.854 { 00:19:51.854 "method": "bdev_set_options", 00:19:51.854 "params": { 00:19:51.854 "bdev_io_pool_size": 65535, 00:19:51.854 "bdev_io_cache_size": 256, 00:19:51.854 "bdev_auto_examine": true, 00:19:51.854 "iobuf_small_cache_size": 128, 00:19:51.854 "iobuf_large_cache_size": 16 00:19:51.854 } 00:19:51.854 }, 00:19:51.854 { 00:19:51.854 "method": "bdev_raid_set_options", 00:19:51.854 "params": { 00:19:51.854 "process_window_size_kb": 1024, 00:19:51.854 "process_max_bandwidth_mb_sec": 0 00:19:51.854 } 00:19:51.854 }, 00:19:51.854 { 00:19:51.854 "method": "bdev_iscsi_set_options", 00:19:51.854 "params": { 00:19:51.854 "timeout_sec": 30 00:19:51.854 } 00:19:51.854 }, 00:19:51.854 { 00:19:51.854 "method": "bdev_nvme_set_options", 00:19:51.854 "params": { 00:19:51.854 "action_on_timeout": "none", 00:19:51.854 "timeout_us": 0, 00:19:51.854 "timeout_admin_us": 0, 00:19:51.854 "keep_alive_timeout_ms": 10000, 00:19:51.854 "arbitration_burst": 0, 00:19:51.854 "low_priority_weight": 0, 00:19:51.854 "medium_priority_weight": 0, 00:19:51.854 "high_priority_weight": 0, 00:19:51.854 "nvme_adminq_poll_period_us": 10000, 00:19:51.854 "nvme_ioq_poll_period_us": 0, 00:19:51.854 "io_queue_requests": 0, 00:19:51.854 "delay_cmd_submit": true, 00:19:51.854 "transport_retry_count": 4, 00:19:51.854 "bdev_retry_count": 3, 00:19:51.854 "transport_ack_timeout": 0, 00:19:51.854 "ctrlr_loss_timeout_sec": 0, 00:19:51.854 "reconnect_delay_sec": 0, 00:19:51.854 "fast_io_fail_timeout_sec": 0, 00:19:51.854 "disable_auto_failback": false, 00:19:51.854 "generate_uuids": false, 00:19:51.854 "transport_tos": 0, 00:19:51.854 "nvme_error_stat": false, 00:19:51.854 "rdma_srq_size": 0, 00:19:51.854 "io_path_stat": false, 00:19:51.854 "allow_accel_sequence": false, 00:19:51.854 "rdma_max_cq_size": 0, 00:19:51.854 "rdma_cm_event_timeout_ms": 0, 00:19:51.854 "dhchap_digests": [ 00:19:51.854 "sha256", 00:19:51.854 "sha384", 00:19:51.854 
"sha512" 00:19:51.854 ], 00:19:51.854 "dhchap_dhgroups": [ 00:19:51.854 "null", 00:19:51.854 "ffdhe2048", 00:19:51.854 "ffdhe3072", 00:19:51.854 "ffdhe4096", 00:19:51.854 "ffdhe6144", 00:19:51.854 "ffdhe8192" 00:19:51.854 ] 00:19:51.854 } 00:19:51.854 }, 00:19:51.854 { 00:19:51.854 "method": "bdev_nvme_set_hotplug", 00:19:51.854 "params": { 00:19:51.854 "period_us": 100000, 00:19:51.854 "enable": false 00:19:51.854 } 00:19:51.854 }, 00:19:51.854 { 00:19:51.854 "method": "bdev_malloc_create", 00:19:51.854 "params": { 00:19:51.854 "name": "malloc0", 00:19:51.854 "num_blocks": 8192, 00:19:51.854 "block_size": 4096, 00:19:51.854 "physical_block_size": 4096, 00:19:51.854 "uuid": "0cb9b75b-b56b-4ec6-b415-3d5b9d7c2e33", 00:19:51.854 "optimal_io_boundary": 0, 00:19:51.854 "md_size": 0, 00:19:51.854 "dif_type": 0, 00:19:51.854 "dif_is_head_of_md": false, 00:19:51.854 "dif_pi_format": 0 00:19:51.854 } 00:19:51.854 }, 00:19:51.854 { 00:19:51.854 "method": "bdev_wait_for_examine" 00:19:51.854 } 00:19:51.854 ] 00:19:51.854 }, 00:19:51.854 { 00:19:51.854 "subsystem": "nbd", 00:19:51.854 "config": [] 00:19:51.854 }, 00:19:51.854 { 00:19:51.854 "subsystem": "scheduler", 00:19:51.854 "config": [ 00:19:51.854 { 00:19:51.854 "method": "framework_set_scheduler", 00:19:51.854 "params": { 00:19:51.854 "name": "static" 00:19:51.854 } 00:19:51.854 } 00:19:51.854 ] 00:19:51.854 }, 00:19:51.854 { 00:19:51.854 "subsystem": "nvmf", 00:19:51.854 "config": [ 00:19:51.854 { 00:19:51.854 "method": "nvmf_set_config", 00:19:51.854 "params": { 00:19:51.854 "discovery_filter": "match_any", 00:19:51.854 "admin_cmd_passthru": { 00:19:51.855 "identify_ctrlr": false 00:19:51.855 }, 00:19:51.855 "dhchap_digests": [ 00:19:51.855 "sha256", 00:19:51.855 "sha384", 00:19:51.855 "sha512" 00:19:51.855 ], 00:19:51.855 "dhchap_dhgroups": [ 00:19:51.855 "null", 00:19:51.855 "ffdhe2048", 00:19:51.855 "ffdhe3072", 00:19:51.855 "ffdhe4096", 00:19:51.855 "ffdhe6144", 00:19:51.855 "ffdhe8192" 00:19:51.855 ] 00:19:51.855 
} 00:19:51.855 }, 00:19:51.855 { 00:19:51.855 "method": "nvmf_set_max_subsystems", 00:19:51.855 "params": { 00:19:51.855 "max_subsystems": 1024 00:19:51.855 } 00:19:51.855 }, 00:19:51.855 { 00:19:51.855 "method": "nvmf_set_crdt", 00:19:51.855 "params": { 00:19:51.855 "crdt1": 0, 00:19:51.855 "crdt2": 0, 00:19:51.855 "crdt3": 0 00:19:51.855 } 00:19:51.855 }, 00:19:51.855 { 00:19:51.855 "method": "nvmf_create_transport", 00:19:51.855 "params": { 00:19:51.855 "trtype": "TCP", 00:19:51.855 "max_queue_depth": 128, 00:19:51.855 "max_io_qpairs_per_ctrlr": 127, 00:19:51.855 "in_capsule_data_size": 4096, 00:19:51.855 "max_io_size": 131072, 00:19:51.855 "io_unit_size": 131072, 00:19:51.855 "max_aq_depth": 128, 00:19:51.855 "num_shared_buffers": 511, 00:19:51.855 "buf_cache_size": 4294967295, 00:19:51.855 "dif_insert_or_strip": false, 00:19:51.855 "zcopy": false, 00:19:51.855 "c2h_success": false, 00:19:51.855 "sock_priority": 0, 00:19:51.855 "abort_timeout_sec": 1, 00:19:51.855 "ack_timeout": 0, 00:19:51.855 "data_wr_pool_size": 0 00:19:51.855 } 00:19:51.855 }, 00:19:51.855 { 00:19:51.855 "method": "nvmf_create_subsystem", 00:19:51.855 "params": { 00:19:51.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.855 "allow_any_host": false, 00:19:51.855 "serial_number": "SPDK00000000000001", 00:19:51.855 "model_number": "SPDK bdev Controller", 00:19:51.855 "max_namespaces": 10, 00:19:51.855 "min_cntlid": 1, 00:19:51.855 "max_cntlid": 65519, 00:19:51.855 "ana_reporting": false 00:19:51.855 } 00:19:51.855 }, 00:19:51.855 { 00:19:51.855 "method": "nvmf_subsystem_add_host", 00:19:51.855 "params": { 00:19:51.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.855 "host": "nqn.2016-06.io.spdk:host1", 00:19:51.855 "psk": "key0" 00:19:51.855 } 00:19:51.855 }, 00:19:51.855 { 00:19:51.855 "method": "nvmf_subsystem_add_ns", 00:19:51.855 "params": { 00:19:51.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.855 "namespace": { 00:19:51.855 "nsid": 1, 00:19:51.855 "bdev_name": "malloc0", 
00:19:51.855 "nguid": "0CB9B75BB56B4EC6B4153D5B9D7C2E33", 00:19:51.855 "uuid": "0cb9b75b-b56b-4ec6-b415-3d5b9d7c2e33", 00:19:51.855 "no_auto_visible": false 00:19:51.855 } 00:19:51.855 } 00:19:51.855 }, 00:19:51.855 { 00:19:51.855 "method": "nvmf_subsystem_add_listener", 00:19:51.855 "params": { 00:19:51.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.855 "listen_address": { 00:19:51.855 "trtype": "TCP", 00:19:51.855 "adrfam": "IPv4", 00:19:51.855 "traddr": "10.0.0.2", 00:19:51.855 "trsvcid": "4420" 00:19:51.855 }, 00:19:51.855 "secure_channel": true 00:19:51.855 } 00:19:51.855 } 00:19:51.855 ] 00:19:51.855 } 00:19:51.855 ] 00:19:51.855 }' 00:19:51.855 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1651478 00:19:51.855 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1651478 00:19:51.855 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:51.855 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1651478 ']' 00:19:51.855 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.855 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:51.855 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:51.855 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:51.855 12:28:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.855 [2024-12-10 12:28:13.941572] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:19:51.855 [2024-12-10 12:28:13.941621] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.113 [2024-12-10 12:28:14.021412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.113 [2024-12-10 12:28:14.060498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.113 [2024-12-10 12:28:14.060536] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.113 [2024-12-10 12:28:14.060543] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.113 [2024-12-10 12:28:14.060549] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.113 [2024-12-10 12:28:14.060553] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
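The `nvmf_tgt` process above is launched with `-c /dev/fd/62`: the test script feeds the JSON configuration in through bash process substitution instead of writing a config file to disk. A minimal sketch of that mechanism, using `cat` as a stand-in for `nvmf_tgt` and an illustrative one-line config (the real config is the large JSON document echoed above):

```shell
#!/usr/bin/env bash
# Process substitution: <(...) exposes the echoed JSON on a /dev/fd/N path,
# which the target process can open and read like an ordinary config file.
config='{"subsystems": [{"subsystem": "nvmf", "config": []}]}'

# Stand-in for: nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62
cat <(echo "$config")
```

This is why the fd number in the log (`/dev/fd/62` here, `/dev/fd/63` for the bdevperf config later) varies: bash assigns the next free descriptor to each substitution.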
00:19:52.113 [2024-12-10 12:28:14.061125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.113 [2024-12-10 12:28:14.273707] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.372 [2024-12-10 12:28:14.305735] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:52.372 [2024-12-10 12:28:14.305939] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.630 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:52.630 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:52.630 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:52.630 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:52.630 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.890 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:52.890 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1651676 00:19:52.890 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1651676 /var/tmp/bdevperf.sock 00:19:52.890 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1651676 ']' 00:19:52.890 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:52.890 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:52.890 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:52.890 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:52.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:52.890 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:52.890 "subsystems": [ 00:19:52.890 { 00:19:52.890 "subsystem": "keyring", 00:19:52.890 "config": [ 00:19:52.890 { 00:19:52.890 "method": "keyring_file_add_key", 00:19:52.890 "params": { 00:19:52.890 "name": "key0", 00:19:52.890 "path": "/tmp/tmp.VY630U2buX" 00:19:52.890 } 00:19:52.890 } 00:19:52.890 ] 00:19:52.890 }, 00:19:52.890 { 00:19:52.890 "subsystem": "iobuf", 00:19:52.890 "config": [ 00:19:52.890 { 00:19:52.890 "method": "iobuf_set_options", 00:19:52.890 "params": { 00:19:52.890 "small_pool_count": 8192, 00:19:52.890 "large_pool_count": 1024, 00:19:52.890 "small_bufsize": 8192, 00:19:52.890 "large_bufsize": 135168, 00:19:52.890 "enable_numa": false 00:19:52.890 } 00:19:52.890 } 00:19:52.890 ] 00:19:52.890 }, 00:19:52.890 { 00:19:52.890 "subsystem": "sock", 00:19:52.890 "config": [ 00:19:52.890 { 00:19:52.890 "method": "sock_set_default_impl", 00:19:52.890 "params": { 00:19:52.890 "impl_name": "posix" 00:19:52.890 } 00:19:52.890 }, 00:19:52.890 { 00:19:52.890 "method": "sock_impl_set_options", 00:19:52.890 "params": { 00:19:52.890 "impl_name": "ssl", 00:19:52.890 "recv_buf_size": 4096, 00:19:52.890 "send_buf_size": 4096, 00:19:52.890 "enable_recv_pipe": true, 00:19:52.890 "enable_quickack": false, 00:19:52.890 "enable_placement_id": 0, 00:19:52.890 "enable_zerocopy_send_server": true, 00:19:52.890 "enable_zerocopy_send_client": false, 00:19:52.890 "zerocopy_threshold": 0, 00:19:52.890 "tls_version": 0, 00:19:52.890 "enable_ktls": false 00:19:52.890 } 00:19:52.890 }, 00:19:52.890 { 00:19:52.890 "method": "sock_impl_set_options", 00:19:52.890 "params": { 
00:19:52.890 "impl_name": "posix", 00:19:52.890 "recv_buf_size": 2097152, 00:19:52.890 "send_buf_size": 2097152, 00:19:52.890 "enable_recv_pipe": true, 00:19:52.890 "enable_quickack": false, 00:19:52.890 "enable_placement_id": 0, 00:19:52.890 "enable_zerocopy_send_server": true, 00:19:52.890 "enable_zerocopy_send_client": false, 00:19:52.890 "zerocopy_threshold": 0, 00:19:52.890 "tls_version": 0, 00:19:52.890 "enable_ktls": false 00:19:52.890 } 00:19:52.890 } 00:19:52.890 ] 00:19:52.890 }, 00:19:52.890 { 00:19:52.890 "subsystem": "vmd", 00:19:52.890 "config": [] 00:19:52.890 }, 00:19:52.890 { 00:19:52.890 "subsystem": "accel", 00:19:52.890 "config": [ 00:19:52.890 { 00:19:52.890 "method": "accel_set_options", 00:19:52.890 "params": { 00:19:52.890 "small_cache_size": 128, 00:19:52.890 "large_cache_size": 16, 00:19:52.890 "task_count": 2048, 00:19:52.890 "sequence_count": 2048, 00:19:52.890 "buf_count": 2048 00:19:52.890 } 00:19:52.890 } 00:19:52.890 ] 00:19:52.890 }, 00:19:52.890 { 00:19:52.890 "subsystem": "bdev", 00:19:52.890 "config": [ 00:19:52.890 { 00:19:52.890 "method": "bdev_set_options", 00:19:52.890 "params": { 00:19:52.890 "bdev_io_pool_size": 65535, 00:19:52.890 "bdev_io_cache_size": 256, 00:19:52.890 "bdev_auto_examine": true, 00:19:52.890 "iobuf_small_cache_size": 128, 00:19:52.890 "iobuf_large_cache_size": 16 00:19:52.890 } 00:19:52.890 }, 00:19:52.890 { 00:19:52.890 "method": "bdev_raid_set_options", 00:19:52.890 "params": { 00:19:52.890 "process_window_size_kb": 1024, 00:19:52.890 "process_max_bandwidth_mb_sec": 0 00:19:52.890 } 00:19:52.890 }, 00:19:52.890 { 00:19:52.890 "method": "bdev_iscsi_set_options", 00:19:52.890 "params": { 00:19:52.890 "timeout_sec": 30 00:19:52.890 } 00:19:52.890 }, 00:19:52.890 { 00:19:52.890 "method": "bdev_nvme_set_options", 00:19:52.890 "params": { 00:19:52.890 "action_on_timeout": "none", 00:19:52.890 "timeout_us": 0, 00:19:52.890 "timeout_admin_us": 0, 00:19:52.890 "keep_alive_timeout_ms": 10000, 00:19:52.890 
"arbitration_burst": 0, 00:19:52.890 "low_priority_weight": 0, 00:19:52.890 "medium_priority_weight": 0, 00:19:52.890 "high_priority_weight": 0, 00:19:52.890 "nvme_adminq_poll_period_us": 10000, 00:19:52.890 "nvme_ioq_poll_period_us": 0, 00:19:52.890 "io_queue_requests": 512, 00:19:52.890 "delay_cmd_submit": true, 00:19:52.890 "transport_retry_count": 4, 00:19:52.890 "bdev_retry_count": 3, 00:19:52.890 "transport_ack_timeout": 0, 00:19:52.890 "ctrlr_loss_timeout_sec": 0, 00:19:52.890 "reconnect_delay_sec": 0, 00:19:52.890 "fast_io_fail_timeout_sec": 0, 00:19:52.890 "disable_auto_failback": false, 00:19:52.890 "generate_uuids": false, 00:19:52.890 "transport_tos": 0, 00:19:52.890 "nvme_error_stat": false, 00:19:52.890 "rdma_srq_size": 0, 00:19:52.890 "io_path_stat": false, 00:19:52.890 "allow_accel_sequence": false, 00:19:52.890 "rdma_max_cq_size": 0, 00:19:52.890 "rdma_cm_event_timeout_ms": 0, 00:19:52.890 "dhchap_digests": [ 00:19:52.890 "sha256", 00:19:52.890 "sha384", 00:19:52.890 "sha512" 00:19:52.890 ], 00:19:52.890 "dhchap_dhgroups": [ 00:19:52.890 "null", 00:19:52.890 "ffdhe2048", 00:19:52.890 "ffdhe3072", 00:19:52.890 "ffdhe4096", 00:19:52.890 "ffdhe6144", 00:19:52.890 "ffdhe8192" 00:19:52.890 ] 00:19:52.890 } 00:19:52.890 }, 00:19:52.890 { 00:19:52.890 "method": "bdev_nvme_attach_controller", 00:19:52.890 "params": { 00:19:52.890 "name": "TLSTEST", 00:19:52.890 "trtype": "TCP", 00:19:52.890 "adrfam": "IPv4", 00:19:52.890 "traddr": "10.0.0.2", 00:19:52.890 "trsvcid": "4420", 00:19:52.890 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.890 "prchk_reftag": false, 00:19:52.890 "prchk_guard": false, 00:19:52.890 "ctrlr_loss_timeout_sec": 0, 00:19:52.890 "reconnect_delay_sec": 0, 00:19:52.890 "fast_io_fail_timeout_sec": 0, 00:19:52.890 "psk": "key0", 00:19:52.890 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:52.890 "hdgst": false, 00:19:52.890 "ddgst": false, 00:19:52.890 "multipath": "multipath" 00:19:52.890 } 00:19:52.890 }, 00:19:52.890 { 00:19:52.890 
"method": "bdev_nvme_set_hotplug", 00:19:52.890 "params": { 00:19:52.890 "period_us": 100000, 00:19:52.890 "enable": false 00:19:52.890 } 00:19:52.890 }, 00:19:52.890 { 00:19:52.890 "method": "bdev_wait_for_examine" 00:19:52.890 } 00:19:52.890 ] 00:19:52.890 }, 00:19:52.890 { 00:19:52.890 "subsystem": "nbd", 00:19:52.890 "config": [] 00:19:52.890 } 00:19:52.890 ] 00:19:52.890 }' 00:19:52.890 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.890 12:28:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.890 [2024-12-10 12:28:14.872735] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:19:52.890 [2024-12-10 12:28:14.872780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1651676 ] 00:19:52.890 [2024-12-10 12:28:14.947863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.890 [2024-12-10 12:28:14.990838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.153 [2024-12-10 12:28:15.144483] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:53.719 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.719 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:53.719 12:28:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:53.719 Running I/O for 10 seconds... 
00:19:56.033 5182.00 IOPS, 20.24 MiB/s [2024-12-10T11:28:19.136Z] 5318.00 IOPS, 20.77 MiB/s [2024-12-10T11:28:20.072Z] 5384.00 IOPS, 21.03 MiB/s [2024-12-10T11:28:21.008Z] 5409.50 IOPS, 21.13 MiB/s [2024-12-10T11:28:21.944Z] 5405.60 IOPS, 21.12 MiB/s [2024-12-10T11:28:22.880Z] 5429.67 IOPS, 21.21 MiB/s [2024-12-10T11:28:24.257Z] 5443.71 IOPS, 21.26 MiB/s [2024-12-10T11:28:25.193Z] 5449.88 IOPS, 21.29 MiB/s [2024-12-10T11:28:26.128Z] 5440.67 IOPS, 21.25 MiB/s [2024-12-10T11:28:26.128Z] 5449.00 IOPS, 21.29 MiB/s 00:20:03.960 Latency(us) 00:20:03.960 [2024-12-10T11:28:26.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.960 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:03.960 Verification LBA range: start 0x0 length 0x2000 00:20:03.960 TLSTESTn1 : 10.02 5451.42 21.29 0.00 0.00 23439.98 5271.37 46730.02 00:20:03.960 [2024-12-10T11:28:26.128Z] =================================================================================================================== 00:20:03.960 [2024-12-10T11:28:26.128Z] Total : 5451.42 21.29 0.00 0.00 23439.98 5271.37 46730.02 00:20:03.960 { 00:20:03.960 "results": [ 00:20:03.960 { 00:20:03.960 "job": "TLSTESTn1", 00:20:03.960 "core_mask": "0x4", 00:20:03.960 "workload": "verify", 00:20:03.960 "status": "finished", 00:20:03.960 "verify_range": { 00:20:03.960 "start": 0, 00:20:03.960 "length": 8192 00:20:03.960 }, 00:20:03.960 "queue_depth": 128, 00:20:03.960 "io_size": 4096, 00:20:03.960 "runtime": 10.018668, 00:20:03.960 "iops": 5451.423283015267, 00:20:03.960 "mibps": 21.294622199278386, 00:20:03.960 "io_failed": 0, 00:20:03.960 "io_timeout": 0, 00:20:03.960 "avg_latency_us": 23439.98129575025, 00:20:03.960 "min_latency_us": 5271.373913043478, 00:20:03.960 "max_latency_us": 46730.01739130435 00:20:03.960 } 00:20:03.960 ], 00:20:03.960 "core_count": 1 00:20:03.960 } 00:20:03.960 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:20:03.960 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1651676 00:20:03.960 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1651676 ']' 00:20:03.960 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1651676 00:20:03.960 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:03.960 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.960 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1651676 00:20:03.960 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:03.960 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:03.960 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1651676' 00:20:03.960 killing process with pid 1651676 00:20:03.960 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1651676 00:20:03.960 Received shutdown signal, test time was about 10.000000 seconds 00:20:03.960 00:20:03.960 Latency(us) 00:20:03.960 [2024-12-10T11:28:26.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.960 [2024-12-10T11:28:26.128Z] =================================================================================================================== 00:20:03.960 [2024-12-10T11:28:26.128Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.960 12:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1651676 00:20:03.960 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1651478 00:20:03.960 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 1651478 ']' 00:20:03.960 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1651478 00:20:03.960 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:03.960 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.960 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1651478 00:20:04.219 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:04.219 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:04.219 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1651478' 00:20:04.219 killing process with pid 1651478 00:20:04.219 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1651478 00:20:04.219 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1651478 00:20:04.219 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:04.219 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:04.219 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:04.219 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.219 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1653521 00:20:04.219 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1653521 00:20:04.219 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:04.219 
12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1653521 ']' 00:20:04.219 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.219 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.219 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.220 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.220 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.220 [2024-12-10 12:28:26.373631] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:20:04.220 [2024-12-10 12:28:26.373679] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.479 [2024-12-10 12:28:26.452843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.479 [2024-12-10 12:28:26.489031] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.479 [2024-12-10 12:28:26.489067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.479 [2024-12-10 12:28:26.489074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.479 [2024-12-10 12:28:26.489080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
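The bdevperf summary earlier reports IOPS and MiB/s as separate columns, but the throughput figure is derived: MiB/s = IOPS × io_size / 2^20, and at queue depth 128 the average latency is consistent with Little's law (IOPS ≈ depth / latency). A quick check against the 10-second run's reported numbers (5451.42 IOPS, 4096-byte I/O, 23439.98 µs average latency):

```shell
# Re-derive the throughput and sanity-check latency via Little's law,
# using the figures reported by the bdevperf run above.
awk -v iops=5451.42 -v sz=4096 -v qd=128 -v lat_us=23439.98 'BEGIN {
    printf "MiB/s      : %.2f\n", iops * sz / (1024 * 1024)  # matches the reported 21.29
    printf "Little-law : %.0f IOPS\n", qd / (lat_us / 1e6)   # close to the reported IOPS
}'
```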
00:20:04.479 [2024-12-10 12:28:26.489106] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:04.479 [2024-12-10 12:28:26.489665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.479 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.479 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:04.479 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:04.479 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:04.479 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.479 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.479 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.VY630U2buX 00:20:04.479 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.VY630U2buX 00:20:04.479 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:04.738 [2024-12-10 12:28:26.806041] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.738 12:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:04.997 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:05.255 [2024-12-10 12:28:27.170970] 
tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:05.256 [2024-12-10 12:28:27.171193] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:05.256 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:05.256 malloc0 00:20:05.256 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:05.515 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.VY630U2buX 00:20:05.773 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:05.773 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1653775 00:20:05.773 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:05.773 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:05.773 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1653775 /var/tmp/bdevperf.sock 00:20:05.773 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1653775 ']' 00:20:05.773 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:05.773 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:20:05.773 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:05.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:05.773 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.773 12:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.032 [2024-12-10 12:28:27.968614] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:20:06.032 [2024-12-10 12:28:27.968663] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653775 ] 00:20:06.032 [2024-12-10 12:28:28.043947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.032 [2024-12-10 12:28:28.083594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.032 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.032 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:06.032 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VY630U2buX 00:20:06.290 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:06.549 [2024-12-10 12:28:28.532650] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:06.549 nvme0n1 00:20:06.549 12:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:06.810 Running I/O for 1 seconds... 00:20:07.844 5368.00 IOPS, 20.97 MiB/s 00:20:07.844 Latency(us) 00:20:07.844 [2024-12-10T11:28:30.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.844 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:07.844 Verification LBA range: start 0x0 length 0x2000 00:20:07.844 nvme0n1 : 1.02 5405.18 21.11 0.00 0.00 23492.91 5071.92 19945.74 00:20:07.844 [2024-12-10T11:28:30.012Z] =================================================================================================================== 00:20:07.844 [2024-12-10T11:28:30.012Z] Total : 5405.18 21.11 0.00 0.00 23492.91 5071.92 19945.74 00:20:07.844 { 00:20:07.844 "results": [ 00:20:07.844 { 00:20:07.844 "job": "nvme0n1", 00:20:07.844 "core_mask": "0x2", 00:20:07.844 "workload": "verify", 00:20:07.844 "status": "finished", 00:20:07.844 "verify_range": { 00:20:07.844 "start": 0, 00:20:07.844 "length": 8192 00:20:07.844 }, 00:20:07.844 "queue_depth": 128, 00:20:07.844 "io_size": 4096, 00:20:07.844 "runtime": 1.016803, 00:20:07.844 "iops": 5405.176813994452, 00:20:07.844 "mibps": 21.113971929665826, 00:20:07.844 "io_failed": 0, 00:20:07.844 "io_timeout": 0, 00:20:07.844 "avg_latency_us": 23492.91245870514, 00:20:07.844 "min_latency_us": 5071.91652173913, 00:20:07.844 "max_latency_us": 19945.739130434784 00:20:07.844 } 00:20:07.844 ], 00:20:07.844 "core_count": 1 00:20:07.844 } 00:20:07.844 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1653775 00:20:07.844 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1653775 ']' 00:20:07.844 
12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1653775 00:20:07.845 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:07.845 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:07.845 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1653775 00:20:07.845 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:07.845 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:07.845 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1653775' 00:20:07.845 killing process with pid 1653775 00:20:07.845 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1653775 00:20:07.845 Received shutdown signal, test time was about 1.000000 seconds 00:20:07.845 00:20:07.845 Latency(us) 00:20:07.845 [2024-12-10T11:28:30.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:07.845 [2024-12-10T11:28:30.013Z] =================================================================================================================== 00:20:07.845 [2024-12-10T11:28:30.013Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:07.845 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1653775 00:20:07.845 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1653521 00:20:07.845 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1653521 ']' 00:20:07.845 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1653521 00:20:07.845 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:07.845 
12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:07.845 12:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1653521 00:20:08.104 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:08.104 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:08.104 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1653521' 00:20:08.104 killing process with pid 1653521 00:20:08.104 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1653521 00:20:08.104 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1653521 00:20:08.104 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:08.104 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:08.104 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:08.104 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.104 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1654245 00:20:08.104 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:08.104 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1654245 00:20:08.104 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1654245 ']' 00:20:08.104 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.104 12:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.104 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.104 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.104 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.104 [2024-12-10 12:28:30.267689] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:20:08.104 [2024-12-10 12:28:30.267737] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.363 [2024-12-10 12:28:30.348399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.363 [2024-12-10 12:28:30.388746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.363 [2024-12-10 12:28:30.388780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.363 [2024-12-10 12:28:30.388788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.363 [2024-12-10 12:28:30.388798] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.363 [2024-12-10 12:28:30.388803] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
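Throughout this run, both the target and bdevperf reference the same key file (`/tmp/tmp.VY630U2buX`) via `keyring_file_add_key` and `--psk key0`. That file holds a pre-shared key in the NVMe/TCP TLS PSK interchange format, `NVMeTLSkey-1:<hash>:<base64 payload>:`. A simplified sketch of producing a format-shaped string (assumptions: the `01` hash selector and the omission of the CRC32 that the real interchange format appends to the secret before base64-encoding make this illustrative only, not a usable key):

```shell
#!/usr/bin/env bash
# Shape of an NVMe/TCP TLS PSK interchange string:
#   NVMeTLSkey-1:01:<base64 payload>:
# NOTE: a real interchange key base64-encodes secret||CRC32; the CRC is
# omitted here, so this output is format-shaped but not a valid key.
secret_b64="$(head -c 32 /dev/urandom | base64)"
printf 'NVMeTLSkey-1:01:%s:\n' "$secret_b64"
```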
00:20:08.363 [2024-12-10 12:28:30.389321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.363 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.363 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:08.363 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:08.363 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:08.363 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.363 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.363 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:08.363 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.363 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.363 [2024-12-10 12:28:30.525755] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.621 malloc0 00:20:08.621 [2024-12-10 12:28:30.553819] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:08.622 [2024-12-10 12:28:30.554032] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.622 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.622 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1654274 00:20:08.622 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:08.622 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 1654274 /var/tmp/bdevperf.sock 00:20:08.622 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1654274 ']' 00:20:08.622 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:08.622 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.622 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:08.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:08.622 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.622 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.622 [2024-12-10 12:28:30.628258] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:20:08.622 [2024-12-10 12:28:30.628297] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1654274 ] 00:20:08.622 [2024-12-10 12:28:30.703503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.622 [2024-12-10 12:28:30.745012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.880 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.880 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:08.880 12:28:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.VY630U2buX 00:20:08.880 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:09.138 [2024-12-10 12:28:31.185676] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:09.138 nvme0n1 00:20:09.138 12:28:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:09.396 Running I/O for 1 seconds... 
00:20:10.330 5325.00 IOPS, 20.80 MiB/s 00:20:10.330 Latency(us) 00:20:10.330 [2024-12-10T11:28:32.498Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.330 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:10.330 Verification LBA range: start 0x0 length 0x2000 00:20:10.330 nvme0n1 : 1.02 5323.12 20.79 0.00 0.00 23811.33 6297.15 28835.84 00:20:10.330 [2024-12-10T11:28:32.498Z] =================================================================================================================== 00:20:10.330 [2024-12-10T11:28:32.498Z] Total : 5323.12 20.79 0.00 0.00 23811.33 6297.15 28835.84 00:20:10.330 { 00:20:10.330 "results": [ 00:20:10.330 { 00:20:10.330 "job": "nvme0n1", 00:20:10.330 "core_mask": "0x2", 00:20:10.330 "workload": "verify", 00:20:10.330 "status": "finished", 00:20:10.330 "verify_range": { 00:20:10.330 "start": 0, 00:20:10.330 "length": 8192 00:20:10.330 }, 00:20:10.330 "queue_depth": 128, 00:20:10.330 "io_size": 4096, 00:20:10.330 "runtime": 1.024588, 00:20:10.330 "iops": 5323.115242419392, 00:20:10.330 "mibps": 20.79341891570075, 00:20:10.330 "io_failed": 0, 00:20:10.330 "io_timeout": 0, 00:20:10.330 "avg_latency_us": 23811.332355032606, 00:20:10.330 "min_latency_us": 6297.154782608695, 00:20:10.330 "max_latency_us": 28835.84 00:20:10.330 } 00:20:10.330 ], 00:20:10.330 "core_count": 1 00:20:10.330 } 00:20:10.330 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:10.330 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.330 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.589 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.589 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:10.589 "subsystems": [ 00:20:10.589 { 00:20:10.589 "subsystem": "keyring", 
00:20:10.589 "config": [ 00:20:10.589 { 00:20:10.589 "method": "keyring_file_add_key", 00:20:10.589 "params": { 00:20:10.589 "name": "key0", 00:20:10.589 "path": "/tmp/tmp.VY630U2buX" 00:20:10.589 } 00:20:10.589 } 00:20:10.589 ] 00:20:10.589 }, 00:20:10.589 { 00:20:10.589 "subsystem": "iobuf", 00:20:10.589 "config": [ 00:20:10.589 { 00:20:10.589 "method": "iobuf_set_options", 00:20:10.589 "params": { 00:20:10.589 "small_pool_count": 8192, 00:20:10.589 "large_pool_count": 1024, 00:20:10.589 "small_bufsize": 8192, 00:20:10.589 "large_bufsize": 135168, 00:20:10.589 "enable_numa": false 00:20:10.589 } 00:20:10.589 } 00:20:10.589 ] 00:20:10.589 }, 00:20:10.589 { 00:20:10.589 "subsystem": "sock", 00:20:10.589 "config": [ 00:20:10.589 { 00:20:10.589 "method": "sock_set_default_impl", 00:20:10.589 "params": { 00:20:10.589 "impl_name": "posix" 00:20:10.589 } 00:20:10.589 }, 00:20:10.589 { 00:20:10.589 "method": "sock_impl_set_options", 00:20:10.589 "params": { 00:20:10.589 "impl_name": "ssl", 00:20:10.589 "recv_buf_size": 4096, 00:20:10.589 "send_buf_size": 4096, 00:20:10.589 "enable_recv_pipe": true, 00:20:10.589 "enable_quickack": false, 00:20:10.589 "enable_placement_id": 0, 00:20:10.589 "enable_zerocopy_send_server": true, 00:20:10.589 "enable_zerocopy_send_client": false, 00:20:10.589 "zerocopy_threshold": 0, 00:20:10.589 "tls_version": 0, 00:20:10.589 "enable_ktls": false 00:20:10.589 } 00:20:10.589 }, 00:20:10.589 { 00:20:10.589 "method": "sock_impl_set_options", 00:20:10.589 "params": { 00:20:10.589 "impl_name": "posix", 00:20:10.589 "recv_buf_size": 2097152, 00:20:10.589 "send_buf_size": 2097152, 00:20:10.589 "enable_recv_pipe": true, 00:20:10.589 "enable_quickack": false, 00:20:10.589 "enable_placement_id": 0, 00:20:10.589 "enable_zerocopy_send_server": true, 00:20:10.589 "enable_zerocopy_send_client": false, 00:20:10.589 "zerocopy_threshold": 0, 00:20:10.589 "tls_version": 0, 00:20:10.589 "enable_ktls": false 00:20:10.589 } 00:20:10.589 } 00:20:10.589 ] 
00:20:10.589 }, 00:20:10.589 { 00:20:10.589 "subsystem": "vmd", 00:20:10.589 "config": [] 00:20:10.589 }, 00:20:10.589 { 00:20:10.589 "subsystem": "accel", 00:20:10.589 "config": [ 00:20:10.589 { 00:20:10.589 "method": "accel_set_options", 00:20:10.589 "params": { 00:20:10.589 "small_cache_size": 128, 00:20:10.589 "large_cache_size": 16, 00:20:10.589 "task_count": 2048, 00:20:10.589 "sequence_count": 2048, 00:20:10.589 "buf_count": 2048 00:20:10.589 } 00:20:10.589 } 00:20:10.589 ] 00:20:10.589 }, 00:20:10.589 { 00:20:10.589 "subsystem": "bdev", 00:20:10.589 "config": [ 00:20:10.589 { 00:20:10.589 "method": "bdev_set_options", 00:20:10.589 "params": { 00:20:10.589 "bdev_io_pool_size": 65535, 00:20:10.589 "bdev_io_cache_size": 256, 00:20:10.589 "bdev_auto_examine": true, 00:20:10.589 "iobuf_small_cache_size": 128, 00:20:10.589 "iobuf_large_cache_size": 16 00:20:10.589 } 00:20:10.589 }, 00:20:10.589 { 00:20:10.589 "method": "bdev_raid_set_options", 00:20:10.589 "params": { 00:20:10.589 "process_window_size_kb": 1024, 00:20:10.589 "process_max_bandwidth_mb_sec": 0 00:20:10.589 } 00:20:10.589 }, 00:20:10.589 { 00:20:10.589 "method": "bdev_iscsi_set_options", 00:20:10.589 "params": { 00:20:10.589 "timeout_sec": 30 00:20:10.589 } 00:20:10.589 }, 00:20:10.589 { 00:20:10.589 "method": "bdev_nvme_set_options", 00:20:10.589 "params": { 00:20:10.589 "action_on_timeout": "none", 00:20:10.589 "timeout_us": 0, 00:20:10.589 "timeout_admin_us": 0, 00:20:10.589 "keep_alive_timeout_ms": 10000, 00:20:10.589 "arbitration_burst": 0, 00:20:10.589 "low_priority_weight": 0, 00:20:10.589 "medium_priority_weight": 0, 00:20:10.589 "high_priority_weight": 0, 00:20:10.589 "nvme_adminq_poll_period_us": 10000, 00:20:10.590 "nvme_ioq_poll_period_us": 0, 00:20:10.590 "io_queue_requests": 0, 00:20:10.590 "delay_cmd_submit": true, 00:20:10.590 "transport_retry_count": 4, 00:20:10.590 "bdev_retry_count": 3, 00:20:10.590 "transport_ack_timeout": 0, 00:20:10.590 "ctrlr_loss_timeout_sec": 0, 00:20:10.590 
"reconnect_delay_sec": 0, 00:20:10.590 "fast_io_fail_timeout_sec": 0, 00:20:10.590 "disable_auto_failback": false, 00:20:10.590 "generate_uuids": false, 00:20:10.590 "transport_tos": 0, 00:20:10.590 "nvme_error_stat": false, 00:20:10.590 "rdma_srq_size": 0, 00:20:10.590 "io_path_stat": false, 00:20:10.590 "allow_accel_sequence": false, 00:20:10.590 "rdma_max_cq_size": 0, 00:20:10.590 "rdma_cm_event_timeout_ms": 0, 00:20:10.590 "dhchap_digests": [ 00:20:10.590 "sha256", 00:20:10.590 "sha384", 00:20:10.590 "sha512" 00:20:10.590 ], 00:20:10.590 "dhchap_dhgroups": [ 00:20:10.590 "null", 00:20:10.590 "ffdhe2048", 00:20:10.590 "ffdhe3072", 00:20:10.590 "ffdhe4096", 00:20:10.590 "ffdhe6144", 00:20:10.590 "ffdhe8192" 00:20:10.590 ] 00:20:10.590 } 00:20:10.590 }, 00:20:10.590 { 00:20:10.590 "method": "bdev_nvme_set_hotplug", 00:20:10.590 "params": { 00:20:10.590 "period_us": 100000, 00:20:10.590 "enable": false 00:20:10.590 } 00:20:10.590 }, 00:20:10.590 { 00:20:10.590 "method": "bdev_malloc_create", 00:20:10.590 "params": { 00:20:10.590 "name": "malloc0", 00:20:10.590 "num_blocks": 8192, 00:20:10.590 "block_size": 4096, 00:20:10.590 "physical_block_size": 4096, 00:20:10.590 "uuid": "28d53848-95e7-499b-8ff5-d659dabacdc1", 00:20:10.590 "optimal_io_boundary": 0, 00:20:10.590 "md_size": 0, 00:20:10.590 "dif_type": 0, 00:20:10.590 "dif_is_head_of_md": false, 00:20:10.590 "dif_pi_format": 0 00:20:10.590 } 00:20:10.590 }, 00:20:10.590 { 00:20:10.590 "method": "bdev_wait_for_examine" 00:20:10.590 } 00:20:10.590 ] 00:20:10.590 }, 00:20:10.590 { 00:20:10.590 "subsystem": "nbd", 00:20:10.590 "config": [] 00:20:10.590 }, 00:20:10.590 { 00:20:10.590 "subsystem": "scheduler", 00:20:10.590 "config": [ 00:20:10.590 { 00:20:10.590 "method": "framework_set_scheduler", 00:20:10.590 "params": { 00:20:10.590 "name": "static" 00:20:10.590 } 00:20:10.590 } 00:20:10.590 ] 00:20:10.590 }, 00:20:10.590 { 00:20:10.590 "subsystem": "nvmf", 00:20:10.590 "config": [ 00:20:10.590 { 00:20:10.590 
"method": "nvmf_set_config", 00:20:10.590 "params": { 00:20:10.590 "discovery_filter": "match_any", 00:20:10.590 "admin_cmd_passthru": { 00:20:10.590 "identify_ctrlr": false 00:20:10.590 }, 00:20:10.590 "dhchap_digests": [ 00:20:10.590 "sha256", 00:20:10.590 "sha384", 00:20:10.590 "sha512" 00:20:10.590 ], 00:20:10.590 "dhchap_dhgroups": [ 00:20:10.590 "null", 00:20:10.590 "ffdhe2048", 00:20:10.590 "ffdhe3072", 00:20:10.590 "ffdhe4096", 00:20:10.590 "ffdhe6144", 00:20:10.590 "ffdhe8192" 00:20:10.590 ] 00:20:10.590 } 00:20:10.590 }, 00:20:10.590 { 00:20:10.590 "method": "nvmf_set_max_subsystems", 00:20:10.590 "params": { 00:20:10.590 "max_subsystems": 1024 00:20:10.590 } 00:20:10.590 }, 00:20:10.590 { 00:20:10.590 "method": "nvmf_set_crdt", 00:20:10.590 "params": { 00:20:10.590 "crdt1": 0, 00:20:10.590 "crdt2": 0, 00:20:10.590 "crdt3": 0 00:20:10.590 } 00:20:10.590 }, 00:20:10.590 { 00:20:10.590 "method": "nvmf_create_transport", 00:20:10.590 "params": { 00:20:10.590 "trtype": "TCP", 00:20:10.590 "max_queue_depth": 128, 00:20:10.590 "max_io_qpairs_per_ctrlr": 127, 00:20:10.590 "in_capsule_data_size": 4096, 00:20:10.590 "max_io_size": 131072, 00:20:10.590 "io_unit_size": 131072, 00:20:10.590 "max_aq_depth": 128, 00:20:10.590 "num_shared_buffers": 511, 00:20:10.590 "buf_cache_size": 4294967295, 00:20:10.590 "dif_insert_or_strip": false, 00:20:10.590 "zcopy": false, 00:20:10.590 "c2h_success": false, 00:20:10.590 "sock_priority": 0, 00:20:10.590 "abort_timeout_sec": 1, 00:20:10.590 "ack_timeout": 0, 00:20:10.590 "data_wr_pool_size": 0 00:20:10.590 } 00:20:10.590 }, 00:20:10.590 { 00:20:10.590 "method": "nvmf_create_subsystem", 00:20:10.590 "params": { 00:20:10.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.590 "allow_any_host": false, 00:20:10.590 "serial_number": "00000000000000000000", 00:20:10.590 "model_number": "SPDK bdev Controller", 00:20:10.590 "max_namespaces": 32, 00:20:10.590 "min_cntlid": 1, 00:20:10.590 "max_cntlid": 65519, 00:20:10.590 "ana_reporting": 
false 00:20:10.590 } 00:20:10.590 }, 00:20:10.590 { 00:20:10.590 "method": "nvmf_subsystem_add_host", 00:20:10.590 "params": { 00:20:10.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.590 "host": "nqn.2016-06.io.spdk:host1", 00:20:10.590 "psk": "key0" 00:20:10.590 } 00:20:10.590 }, 00:20:10.590 { 00:20:10.590 "method": "nvmf_subsystem_add_ns", 00:20:10.590 "params": { 00:20:10.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.590 "namespace": { 00:20:10.590 "nsid": 1, 00:20:10.590 "bdev_name": "malloc0", 00:20:10.590 "nguid": "28D5384895E7499B8FF5D659DABACDC1", 00:20:10.590 "uuid": "28d53848-95e7-499b-8ff5-d659dabacdc1", 00:20:10.590 "no_auto_visible": false 00:20:10.590 } 00:20:10.590 } 00:20:10.590 }, 00:20:10.590 { 00:20:10.590 "method": "nvmf_subsystem_add_listener", 00:20:10.590 "params": { 00:20:10.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.590 "listen_address": { 00:20:10.590 "trtype": "TCP", 00:20:10.590 "adrfam": "IPv4", 00:20:10.590 "traddr": "10.0.0.2", 00:20:10.590 "trsvcid": "4420" 00:20:10.590 }, 00:20:10.590 "secure_channel": false, 00:20:10.590 "sock_impl": "ssl" 00:20:10.590 } 00:20:10.590 } 00:20:10.590 ] 00:20:10.590 } 00:20:10.590 ] 00:20:10.590 }' 00:20:10.590 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:10.849 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:10.849 "subsystems": [ 00:20:10.849 { 00:20:10.849 "subsystem": "keyring", 00:20:10.849 "config": [ 00:20:10.849 { 00:20:10.849 "method": "keyring_file_add_key", 00:20:10.849 "params": { 00:20:10.849 "name": "key0", 00:20:10.849 "path": "/tmp/tmp.VY630U2buX" 00:20:10.849 } 00:20:10.849 } 00:20:10.849 ] 00:20:10.849 }, 00:20:10.849 { 00:20:10.849 "subsystem": "iobuf", 00:20:10.849 "config": [ 00:20:10.849 { 00:20:10.849 "method": "iobuf_set_options", 00:20:10.849 "params": { 00:20:10.849 "small_pool_count": 
8192, 00:20:10.849 "large_pool_count": 1024, 00:20:10.849 "small_bufsize": 8192, 00:20:10.849 "large_bufsize": 135168, 00:20:10.849 "enable_numa": false 00:20:10.849 } 00:20:10.849 } 00:20:10.849 ] 00:20:10.849 }, 00:20:10.849 { 00:20:10.849 "subsystem": "sock", 00:20:10.849 "config": [ 00:20:10.849 { 00:20:10.849 "method": "sock_set_default_impl", 00:20:10.849 "params": { 00:20:10.849 "impl_name": "posix" 00:20:10.849 } 00:20:10.849 }, 00:20:10.849 { 00:20:10.849 "method": "sock_impl_set_options", 00:20:10.849 "params": { 00:20:10.849 "impl_name": "ssl", 00:20:10.849 "recv_buf_size": 4096, 00:20:10.849 "send_buf_size": 4096, 00:20:10.849 "enable_recv_pipe": true, 00:20:10.849 "enable_quickack": false, 00:20:10.849 "enable_placement_id": 0, 00:20:10.849 "enable_zerocopy_send_server": true, 00:20:10.849 "enable_zerocopy_send_client": false, 00:20:10.849 "zerocopy_threshold": 0, 00:20:10.849 "tls_version": 0, 00:20:10.849 "enable_ktls": false 00:20:10.849 } 00:20:10.849 }, 00:20:10.849 { 00:20:10.849 "method": "sock_impl_set_options", 00:20:10.849 "params": { 00:20:10.849 "impl_name": "posix", 00:20:10.849 "recv_buf_size": 2097152, 00:20:10.849 "send_buf_size": 2097152, 00:20:10.849 "enable_recv_pipe": true, 00:20:10.849 "enable_quickack": false, 00:20:10.849 "enable_placement_id": 0, 00:20:10.849 "enable_zerocopy_send_server": true, 00:20:10.849 "enable_zerocopy_send_client": false, 00:20:10.849 "zerocopy_threshold": 0, 00:20:10.849 "tls_version": 0, 00:20:10.849 "enable_ktls": false 00:20:10.849 } 00:20:10.849 } 00:20:10.849 ] 00:20:10.849 }, 00:20:10.849 { 00:20:10.849 "subsystem": "vmd", 00:20:10.849 "config": [] 00:20:10.849 }, 00:20:10.849 { 00:20:10.849 "subsystem": "accel", 00:20:10.849 "config": [ 00:20:10.849 { 00:20:10.849 "method": "accel_set_options", 00:20:10.849 "params": { 00:20:10.849 "small_cache_size": 128, 00:20:10.849 "large_cache_size": 16, 00:20:10.849 "task_count": 2048, 00:20:10.849 "sequence_count": 2048, 00:20:10.849 "buf_count": 2048 
00:20:10.849 } 00:20:10.849 } 00:20:10.849 ] 00:20:10.849 }, 00:20:10.849 { 00:20:10.849 "subsystem": "bdev", 00:20:10.849 "config": [ 00:20:10.849 { 00:20:10.849 "method": "bdev_set_options", 00:20:10.849 "params": { 00:20:10.849 "bdev_io_pool_size": 65535, 00:20:10.849 "bdev_io_cache_size": 256, 00:20:10.849 "bdev_auto_examine": true, 00:20:10.849 "iobuf_small_cache_size": 128, 00:20:10.849 "iobuf_large_cache_size": 16 00:20:10.849 } 00:20:10.849 }, 00:20:10.849 { 00:20:10.849 "method": "bdev_raid_set_options", 00:20:10.849 "params": { 00:20:10.849 "process_window_size_kb": 1024, 00:20:10.849 "process_max_bandwidth_mb_sec": 0 00:20:10.849 } 00:20:10.849 }, 00:20:10.849 { 00:20:10.849 "method": "bdev_iscsi_set_options", 00:20:10.849 "params": { 00:20:10.849 "timeout_sec": 30 00:20:10.849 } 00:20:10.849 }, 00:20:10.849 { 00:20:10.849 "method": "bdev_nvme_set_options", 00:20:10.849 "params": { 00:20:10.850 "action_on_timeout": "none", 00:20:10.850 "timeout_us": 0, 00:20:10.850 "timeout_admin_us": 0, 00:20:10.850 "keep_alive_timeout_ms": 10000, 00:20:10.850 "arbitration_burst": 0, 00:20:10.850 "low_priority_weight": 0, 00:20:10.850 "medium_priority_weight": 0, 00:20:10.850 "high_priority_weight": 0, 00:20:10.850 "nvme_adminq_poll_period_us": 10000, 00:20:10.850 "nvme_ioq_poll_period_us": 0, 00:20:10.850 "io_queue_requests": 512, 00:20:10.850 "delay_cmd_submit": true, 00:20:10.850 "transport_retry_count": 4, 00:20:10.850 "bdev_retry_count": 3, 00:20:10.850 "transport_ack_timeout": 0, 00:20:10.850 "ctrlr_loss_timeout_sec": 0, 00:20:10.850 "reconnect_delay_sec": 0, 00:20:10.850 "fast_io_fail_timeout_sec": 0, 00:20:10.850 "disable_auto_failback": false, 00:20:10.850 "generate_uuids": false, 00:20:10.850 "transport_tos": 0, 00:20:10.850 "nvme_error_stat": false, 00:20:10.850 "rdma_srq_size": 0, 00:20:10.850 "io_path_stat": false, 00:20:10.850 "allow_accel_sequence": false, 00:20:10.850 "rdma_max_cq_size": 0, 00:20:10.850 "rdma_cm_event_timeout_ms": 0, 00:20:10.850 
"dhchap_digests": [ 00:20:10.850 "sha256", 00:20:10.850 "sha384", 00:20:10.850 "sha512" 00:20:10.850 ], 00:20:10.850 "dhchap_dhgroups": [ 00:20:10.850 "null", 00:20:10.850 "ffdhe2048", 00:20:10.850 "ffdhe3072", 00:20:10.850 "ffdhe4096", 00:20:10.850 "ffdhe6144", 00:20:10.850 "ffdhe8192" 00:20:10.850 ] 00:20:10.850 } 00:20:10.850 }, 00:20:10.850 { 00:20:10.850 "method": "bdev_nvme_attach_controller", 00:20:10.850 "params": { 00:20:10.850 "name": "nvme0", 00:20:10.850 "trtype": "TCP", 00:20:10.850 "adrfam": "IPv4", 00:20:10.850 "traddr": "10.0.0.2", 00:20:10.850 "trsvcid": "4420", 00:20:10.850 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.850 "prchk_reftag": false, 00:20:10.850 "prchk_guard": false, 00:20:10.850 "ctrlr_loss_timeout_sec": 0, 00:20:10.850 "reconnect_delay_sec": 0, 00:20:10.850 "fast_io_fail_timeout_sec": 0, 00:20:10.850 "psk": "key0", 00:20:10.850 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.850 "hdgst": false, 00:20:10.850 "ddgst": false, 00:20:10.850 "multipath": "multipath" 00:20:10.850 } 00:20:10.850 }, 00:20:10.850 { 00:20:10.850 "method": "bdev_nvme_set_hotplug", 00:20:10.850 "params": { 00:20:10.850 "period_us": 100000, 00:20:10.850 "enable": false 00:20:10.850 } 00:20:10.850 }, 00:20:10.850 { 00:20:10.850 "method": "bdev_enable_histogram", 00:20:10.850 "params": { 00:20:10.850 "name": "nvme0n1", 00:20:10.850 "enable": true 00:20:10.850 } 00:20:10.850 }, 00:20:10.850 { 00:20:10.850 "method": "bdev_wait_for_examine" 00:20:10.850 } 00:20:10.850 ] 00:20:10.850 }, 00:20:10.850 { 00:20:10.850 "subsystem": "nbd", 00:20:10.850 "config": [] 00:20:10.850 } 00:20:10.850 ] 00:20:10.850 }' 00:20:10.850 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1654274 00:20:10.850 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1654274 ']' 00:20:10.850 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1654274 00:20:10.850 12:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:10.850 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.850 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1654274 00:20:10.850 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:10.850 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:10.850 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1654274' 00:20:10.850 killing process with pid 1654274 00:20:10.850 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1654274 00:20:10.850 Received shutdown signal, test time was about 1.000000 seconds 00:20:10.850 00:20:10.850 Latency(us) 00:20:10.850 [2024-12-10T11:28:33.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.850 [2024-12-10T11:28:33.018Z] =================================================================================================================== 00:20:10.850 [2024-12-10T11:28:33.018Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:10.850 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1654274 00:20:10.850 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1654245 00:20:10.850 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1654245 ']' 00:20:10.850 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1654245 00:20:10.850 12:28:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:10.850 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.850 
12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1654245 00:20:11.110 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:11.110 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:11.110 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1654245' 00:20:11.110 killing process with pid 1654245 00:20:11.110 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1654245 00:20:11.110 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1654245 00:20:11.110 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:11.110 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:11.110 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:11.110 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:11.110 "subsystems": [ 00:20:11.110 { 00:20:11.110 "subsystem": "keyring", 00:20:11.110 "config": [ 00:20:11.110 { 00:20:11.110 "method": "keyring_file_add_key", 00:20:11.110 "params": { 00:20:11.110 "name": "key0", 00:20:11.110 "path": "/tmp/tmp.VY630U2buX" 00:20:11.110 } 00:20:11.110 } 00:20:11.110 ] 00:20:11.110 }, 00:20:11.110 { 00:20:11.110 "subsystem": "iobuf", 00:20:11.110 "config": [ 00:20:11.110 { 00:20:11.110 "method": "iobuf_set_options", 00:20:11.110 "params": { 00:20:11.110 "small_pool_count": 8192, 00:20:11.110 "large_pool_count": 1024, 00:20:11.110 "small_bufsize": 8192, 00:20:11.110 "large_bufsize": 135168, 00:20:11.110 "enable_numa": false 00:20:11.110 } 00:20:11.110 } 00:20:11.110 ] 00:20:11.110 }, 00:20:11.110 { 00:20:11.110 "subsystem": "sock", 00:20:11.110 "config": [ 
00:20:11.110 { 00:20:11.110 "method": "sock_set_default_impl", 00:20:11.110 "params": { 00:20:11.110 "impl_name": "posix" 00:20:11.110 } 00:20:11.110 }, 00:20:11.110 { 00:20:11.110 "method": "sock_impl_set_options", 00:20:11.110 "params": { 00:20:11.110 "impl_name": "ssl", 00:20:11.110 "recv_buf_size": 4096, 00:20:11.110 "send_buf_size": 4096, 00:20:11.110 "enable_recv_pipe": true, 00:20:11.110 "enable_quickack": false, 00:20:11.110 "enable_placement_id": 0, 00:20:11.110 "enable_zerocopy_send_server": true, 00:20:11.110 "enable_zerocopy_send_client": false, 00:20:11.110 "zerocopy_threshold": 0, 00:20:11.110 "tls_version": 0, 00:20:11.110 "enable_ktls": false 00:20:11.110 } 00:20:11.110 }, 00:20:11.110 { 00:20:11.110 "method": "sock_impl_set_options", 00:20:11.110 "params": { 00:20:11.110 "impl_name": "posix", 00:20:11.110 "recv_buf_size": 2097152, 00:20:11.110 "send_buf_size": 2097152, 00:20:11.110 "enable_recv_pipe": true, 00:20:11.110 "enable_quickack": false, 00:20:11.110 "enable_placement_id": 0, 00:20:11.110 "enable_zerocopy_send_server": true, 00:20:11.110 "enable_zerocopy_send_client": false, 00:20:11.110 "zerocopy_threshold": 0, 00:20:11.110 "tls_version": 0, 00:20:11.110 "enable_ktls": false 00:20:11.110 } 00:20:11.110 } 00:20:11.110 ] 00:20:11.110 }, 00:20:11.110 { 00:20:11.110 "subsystem": "vmd", 00:20:11.110 "config": [] 00:20:11.110 }, 00:20:11.110 { 00:20:11.110 "subsystem": "accel", 00:20:11.110 "config": [ 00:20:11.110 { 00:20:11.110 "method": "accel_set_options", 00:20:11.110 "params": { 00:20:11.110 "small_cache_size": 128, 00:20:11.110 "large_cache_size": 16, 00:20:11.110 "task_count": 2048, 00:20:11.110 "sequence_count": 2048, 00:20:11.110 "buf_count": 2048 00:20:11.110 } 00:20:11.110 } 00:20:11.110 ] 00:20:11.110 }, 00:20:11.110 { 00:20:11.110 "subsystem": "bdev", 00:20:11.110 "config": [ 00:20:11.110 { 00:20:11.110 "method": "bdev_set_options", 00:20:11.110 "params": { 00:20:11.110 "bdev_io_pool_size": 65535, 00:20:11.110 "bdev_io_cache_size": 
256, 00:20:11.110 "bdev_auto_examine": true, 00:20:11.110 "iobuf_small_cache_size": 128, 00:20:11.110 "iobuf_large_cache_size": 16 00:20:11.110 } 00:20:11.110 }, 00:20:11.110 { 00:20:11.110 "method": "bdev_raid_set_options", 00:20:11.110 "params": { 00:20:11.110 "process_window_size_kb": 1024, 00:20:11.110 "process_max_bandwidth_mb_sec": 0 00:20:11.110 } 00:20:11.110 }, 00:20:11.110 { 00:20:11.110 "method": "bdev_iscsi_set_options", 00:20:11.110 "params": { 00:20:11.110 "timeout_sec": 30 00:20:11.110 } 00:20:11.110 }, 00:20:11.110 { 00:20:11.110 "method": "bdev_nvme_set_options", 00:20:11.110 "params": { 00:20:11.110 "action_on_timeout": "none", 00:20:11.110 "timeout_us": 0, 00:20:11.110 "timeout_admin_us": 0, 00:20:11.110 "keep_alive_timeout_ms": 10000, 00:20:11.110 "arbitration_burst": 0, 00:20:11.110 "low_priority_weight": 0, 00:20:11.110 "medium_priority_weight": 0, 00:20:11.110 "high_priority_weight": 0, 00:20:11.110 "nvme_adminq_poll_period_us": 10000, 00:20:11.110 "nvme_ioq_poll_period_us": 0, 00:20:11.110 "io_queue_requests": 0, 00:20:11.110 "delay_cmd_submit": true, 00:20:11.110 "transport_retry_count": 4, 00:20:11.110 "bdev_retry_count": 3, 00:20:11.110 "transport_ack_timeout": 0, 00:20:11.110 "ctrlr_loss_timeout_sec": 0, 00:20:11.110 "reconnect_delay_sec": 0, 00:20:11.110 "fast_io_fail_timeout_sec": 0, 00:20:11.110 "disable_auto_failback": false, 00:20:11.110 "generate_uuids": false, 00:20:11.110 "transport_tos": 0, 00:20:11.110 "nvme_error_stat": false, 00:20:11.110 "rdma_srq_size": 0, 00:20:11.110 "io_path_stat": false, 00:20:11.110 "allow_accel_sequence": false, 00:20:11.110 "rdma_max_cq_size": 0, 00:20:11.110 "rdma_cm_event_timeout_ms": 0, 00:20:11.110 "dhchap_digests": [ 00:20:11.110 "sha256", 00:20:11.110 "sha384", 00:20:11.110 "sha512" 00:20:11.110 ], 00:20:11.110 "dhchap_dhgroups": [ 00:20:11.110 "null", 00:20:11.110 "ffdhe2048", 00:20:11.110 "ffdhe3072", 00:20:11.110 "ffdhe4096", 00:20:11.110 "ffdhe6144", 00:20:11.110 "ffdhe8192" 00:20:11.110 ] 
00:20:11.110 } 00:20:11.110 }, 00:20:11.110 { 00:20:11.110 "method": "bdev_nvme_set_hotplug", 00:20:11.110 "params": { 00:20:11.110 "period_us": 100000, 00:20:11.110 "enable": false 00:20:11.110 } 00:20:11.110 }, 00:20:11.110 { 00:20:11.110 "method": "bdev_malloc_create", 00:20:11.110 "params": { 00:20:11.110 "name": "malloc0", 00:20:11.110 "num_blocks": 8192, 00:20:11.110 "block_size": 4096, 00:20:11.110 "physical_block_size": 4096, 00:20:11.110 "uuid": "28d53848-95e7-499b-8ff5-d659dabacdc1", 00:20:11.110 "optimal_io_boundary": 0, 00:20:11.110 "md_size": 0, 00:20:11.110 "dif_type": 0, 00:20:11.110 "dif_is_head_of_md": false, 00:20:11.110 "dif_pi_format": 0 00:20:11.110 } 00:20:11.110 }, 00:20:11.110 { 00:20:11.110 "method": "bdev_wait_for_examine" 00:20:11.110 } 00:20:11.110 ] 00:20:11.110 }, 00:20:11.110 { 00:20:11.110 "subsystem": "nbd", 00:20:11.110 "config": [] 00:20:11.110 }, 00:20:11.110 { 00:20:11.110 "subsystem": "scheduler", 00:20:11.110 "config": [ 00:20:11.110 { 00:20:11.110 "method": "framework_set_scheduler", 00:20:11.110 "params": { 00:20:11.110 "name": "static" 00:20:11.110 } 00:20:11.110 } 00:20:11.110 ] 00:20:11.110 }, 00:20:11.110 { 00:20:11.110 "subsystem": "nvmf", 00:20:11.110 "config": [ 00:20:11.110 { 00:20:11.110 "method": "nvmf_set_config", 00:20:11.110 "params": { 00:20:11.110 "discovery_filter": "match_any", 00:20:11.110 "admin_cmd_passthru": { 00:20:11.110 "identify_ctrlr": false 00:20:11.110 }, 00:20:11.110 "dhchap_digests": [ 00:20:11.110 "sha256", 00:20:11.110 "sha384", 00:20:11.110 "sha512" 00:20:11.110 ], 00:20:11.110 "dhchap_dhgroups": [ 00:20:11.110 "null", 00:20:11.110 "ffdhe2048", 00:20:11.110 "ffdhe3072", 00:20:11.110 "ffdhe4096", 00:20:11.110 "ffdhe6144", 00:20:11.110 "ffdhe8192" 00:20:11.110 ] 00:20:11.110 } 00:20:11.110 }, 00:20:11.110 { 00:20:11.110 "method": "nvmf_set_max_subsystems", 00:20:11.110 "params": { 00:20:11.110 "max_subsystems": 1024 00:20:11.110 } 00:20:11.110 }, 00:20:11.110 { 00:20:11.110 "method": 
"nvmf_set_crdt", 00:20:11.110 "params": { 00:20:11.110 "crdt1": 0, 00:20:11.110 "crdt2": 0, 00:20:11.110 "crdt3": 0 00:20:11.110 } 00:20:11.110 }, 00:20:11.110 { 00:20:11.110 "method": "nvmf_create_transport", 00:20:11.110 "params": { 00:20:11.110 "trtype": "TCP", 00:20:11.110 "max_queue_depth": 128, 00:20:11.110 "max_io_qpairs_per_ctrlr": 127, 00:20:11.110 "in_capsule_data_size": 4096, 00:20:11.110 "max_io_size": 131072, 00:20:11.110 "io_unit_size": 131072, 00:20:11.110 "max_aq_depth": 128, 00:20:11.110 "num_shared_buffers": 511, 00:20:11.110 "buf_cache_size": 4294967295, 00:20:11.110 "dif_insert_or_strip": false, 00:20:11.110 "zcopy": false, 00:20:11.110 "c2h_success": false, 00:20:11.110 "sock_priority": 0, 00:20:11.110 "abort_timeout_sec": 1, 00:20:11.110 "ack_timeout": 0, 00:20:11.110 "data_wr_pool_size": 0 00:20:11.110 } 00:20:11.110 }, 00:20:11.110 { 00:20:11.110 "method": "nvmf_create_subsystem", 00:20:11.110 "params": { 00:20:11.110 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.110 "allow_any_host": false, 00:20:11.110 "serial_number": "00000000000000000000", 00:20:11.111 "model_number": "SPDK bdev Controller", 00:20:11.111 "max_namespaces": 32, 00:20:11.111 "min_cntlid": 1, 00:20:11.111 "max_cntlid": 65519, 00:20:11.111 "ana_reporting": false 00:20:11.111 } 00:20:11.111 }, 00:20:11.111 { 00:20:11.111 "method": "nvmf_subsystem_add_host", 00:20:11.111 "params": { 00:20:11.111 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.111 "host": "nqn.2016-06.io.spdk:host1", 00:20:11.111 "psk": "key0" 00:20:11.111 } 00:20:11.111 }, 00:20:11.111 { 00:20:11.111 "method": "nvmf_subsystem_add_ns", 00:20:11.111 "params": { 00:20:11.111 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.111 "namespace": { 00:20:11.111 "nsid": 1, 00:20:11.111 "bdev_name": "malloc0", 00:20:11.111 "nguid": "28D5384895E7499B8FF5D659DABACDC1", 00:20:11.111 "uuid": "28d53848-95e7-499b-8ff5-d659dabacdc1", 00:20:11.111 "no_auto_visible": false 00:20:11.111 } 00:20:11.111 } 00:20:11.111 }, 00:20:11.111 { 
00:20:11.111 "method": "nvmf_subsystem_add_listener", 00:20:11.111 "params": { 00:20:11.111 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.111 "listen_address": { 00:20:11.111 "trtype": "TCP", 00:20:11.111 "adrfam": "IPv4", 00:20:11.111 "traddr": "10.0.0.2", 00:20:11.111 "trsvcid": "4420" 00:20:11.111 }, 00:20:11.111 "secure_channel": false, 00:20:11.111 "sock_impl": "ssl" 00:20:11.111 } 00:20:11.111 } 00:20:11.111 ] 00:20:11.111 } 00:20:11.111 ] 00:20:11.111 }' 00:20:11.111 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.111 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1654742 00:20:11.111 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:11.111 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1654742 00:20:11.111 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1654742 ']' 00:20:11.111 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.111 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.111 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.111 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.111 12:28:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.111 [2024-12-10 12:28:33.274433] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:20:11.111 [2024-12-10 12:28:33.274481] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.370 [2024-12-10 12:28:33.354557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.370 [2024-12-10 12:28:33.392622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.370 [2024-12-10 12:28:33.392657] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.370 [2024-12-10 12:28:33.392664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.370 [2024-12-10 12:28:33.392670] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.370 [2024-12-10 12:28:33.392674] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:11.370 [2024-12-10 12:28:33.393228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.629 [2024-12-10 12:28:33.607291] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.629 [2024-12-10 12:28:33.639318] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:11.629 [2024-12-10 12:28:33.639536] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.196 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.196 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:12.196 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:12.196 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:12.196 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.196 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.196 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1654987 00:20:12.196 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1654987 /var/tmp/bdevperf.sock 00:20:12.196 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1654987 ']' 00:20:12.196 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.196 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:12.196 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:20:12.196 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.196 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:12.196 "subsystems": [ 00:20:12.196 { 00:20:12.196 "subsystem": "keyring", 00:20:12.196 "config": [ 00:20:12.196 { 00:20:12.196 "method": "keyring_file_add_key", 00:20:12.196 "params": { 00:20:12.196 "name": "key0", 00:20:12.196 "path": "/tmp/tmp.VY630U2buX" 00:20:12.196 } 00:20:12.196 } 00:20:12.196 ] 00:20:12.196 }, 00:20:12.196 { 00:20:12.196 "subsystem": "iobuf", 00:20:12.196 "config": [ 00:20:12.196 { 00:20:12.196 "method": "iobuf_set_options", 00:20:12.196 "params": { 00:20:12.196 "small_pool_count": 8192, 00:20:12.196 "large_pool_count": 1024, 00:20:12.196 "small_bufsize": 8192, 00:20:12.196 "large_bufsize": 135168, 00:20:12.196 "enable_numa": false 00:20:12.196 } 00:20:12.196 } 00:20:12.196 ] 00:20:12.196 }, 00:20:12.196 { 00:20:12.196 "subsystem": "sock", 00:20:12.196 "config": [ 00:20:12.196 { 00:20:12.196 "method": "sock_set_default_impl", 00:20:12.196 "params": { 00:20:12.196 "impl_name": "posix" 00:20:12.196 } 00:20:12.196 }, 00:20:12.196 { 00:20:12.196 "method": "sock_impl_set_options", 00:20:12.196 "params": { 00:20:12.196 "impl_name": "ssl", 00:20:12.196 "recv_buf_size": 4096, 00:20:12.196 "send_buf_size": 4096, 00:20:12.196 "enable_recv_pipe": true, 00:20:12.196 "enable_quickack": false, 00:20:12.196 "enable_placement_id": 0, 00:20:12.196 "enable_zerocopy_send_server": true, 00:20:12.196 "enable_zerocopy_send_client": false, 00:20:12.196 "zerocopy_threshold": 0, 00:20:12.196 "tls_version": 0, 00:20:12.196 "enable_ktls": false 00:20:12.196 } 00:20:12.196 }, 00:20:12.196 { 00:20:12.196 "method": "sock_impl_set_options", 00:20:12.196 "params": { 
00:20:12.196 "impl_name": "posix", 00:20:12.196 "recv_buf_size": 2097152, 00:20:12.196 "send_buf_size": 2097152, 00:20:12.196 "enable_recv_pipe": true, 00:20:12.196 "enable_quickack": false, 00:20:12.196 "enable_placement_id": 0, 00:20:12.196 "enable_zerocopy_send_server": true, 00:20:12.196 "enable_zerocopy_send_client": false, 00:20:12.196 "zerocopy_threshold": 0, 00:20:12.196 "tls_version": 0, 00:20:12.196 "enable_ktls": false 00:20:12.196 } 00:20:12.196 } 00:20:12.196 ] 00:20:12.196 }, 00:20:12.196 { 00:20:12.197 "subsystem": "vmd", 00:20:12.197 "config": [] 00:20:12.197 }, 00:20:12.197 { 00:20:12.197 "subsystem": "accel", 00:20:12.197 "config": [ 00:20:12.197 { 00:20:12.197 "method": "accel_set_options", 00:20:12.197 "params": { 00:20:12.197 "small_cache_size": 128, 00:20:12.197 "large_cache_size": 16, 00:20:12.197 "task_count": 2048, 00:20:12.197 "sequence_count": 2048, 00:20:12.197 "buf_count": 2048 00:20:12.197 } 00:20:12.197 } 00:20:12.197 ] 00:20:12.197 }, 00:20:12.197 { 00:20:12.197 "subsystem": "bdev", 00:20:12.197 "config": [ 00:20:12.197 { 00:20:12.197 "method": "bdev_set_options", 00:20:12.197 "params": { 00:20:12.197 "bdev_io_pool_size": 65535, 00:20:12.197 "bdev_io_cache_size": 256, 00:20:12.197 "bdev_auto_examine": true, 00:20:12.197 "iobuf_small_cache_size": 128, 00:20:12.197 "iobuf_large_cache_size": 16 00:20:12.197 } 00:20:12.197 }, 00:20:12.197 { 00:20:12.197 "method": "bdev_raid_set_options", 00:20:12.197 "params": { 00:20:12.197 "process_window_size_kb": 1024, 00:20:12.197 "process_max_bandwidth_mb_sec": 0 00:20:12.197 } 00:20:12.197 }, 00:20:12.197 { 00:20:12.197 "method": "bdev_iscsi_set_options", 00:20:12.197 "params": { 00:20:12.197 "timeout_sec": 30 00:20:12.197 } 00:20:12.197 }, 00:20:12.197 { 00:20:12.197 "method": "bdev_nvme_set_options", 00:20:12.197 "params": { 00:20:12.197 "action_on_timeout": "none", 00:20:12.197 "timeout_us": 0, 00:20:12.197 "timeout_admin_us": 0, 00:20:12.197 "keep_alive_timeout_ms": 10000, 00:20:12.197 
"arbitration_burst": 0, 00:20:12.197 "low_priority_weight": 0, 00:20:12.197 "medium_priority_weight": 0, 00:20:12.197 "high_priority_weight": 0, 00:20:12.197 "nvme_adminq_poll_period_us": 10000, 00:20:12.197 "nvme_ioq_poll_period_us": 0, 00:20:12.197 "io_queue_requests": 512, 00:20:12.197 "delay_cmd_submit": true, 00:20:12.197 "transport_retry_count": 4, 00:20:12.197 "bdev_retry_count": 3, 00:20:12.197 "transport_ack_timeout": 0, 00:20:12.197 "ctrlr_loss_timeout_sec": 0, 00:20:12.197 "reconnect_delay_sec": 0, 00:20:12.197 "fast_io_fail_timeout_sec": 0, 00:20:12.197 "disable_auto_failback": false, 00:20:12.197 "generate_uuids": false, 00:20:12.197 "transport_tos": 0, 00:20:12.197 "nvme_error_stat": false, 00:20:12.197 "rdma_srq_size": 0, 00:20:12.197 "io_path_stat": false, 00:20:12.197 "allow_accel_sequence": false, 00:20:12.197 "rdma_max_cq_size": 0, 00:20:12.197 "rdma_cm_event_timeout_ms": 0, 00:20:12.197 "dhchap_digests": [ 00:20:12.197 "sha256", 00:20:12.197 "sha384", 00:20:12.197 "sha512" 00:20:12.197 ], 00:20:12.197 "dhchap_dhgroups": [ 00:20:12.197 "null", 00:20:12.197 "ffdhe2048", 00:20:12.197 "ffdhe3072", 00:20:12.197 "ffdhe4096", 00:20:12.197 "ffdhe6144", 00:20:12.197 "ffdhe8192" 00:20:12.197 ] 00:20:12.197 } 00:20:12.197 }, 00:20:12.197 { 00:20:12.197 "method": "bdev_nvme_attach_controller", 00:20:12.197 "params": { 00:20:12.197 "name": "nvme0", 00:20:12.197 "trtype": "TCP", 00:20:12.197 "adrfam": "IPv4", 00:20:12.197 "traddr": "10.0.0.2", 00:20:12.197 "trsvcid": "4420", 00:20:12.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.197 "prchk_reftag": false, 00:20:12.197 "prchk_guard": false, 00:20:12.197 "ctrlr_loss_timeout_sec": 0, 00:20:12.197 "reconnect_delay_sec": 0, 00:20:12.197 "fast_io_fail_timeout_sec": 0, 00:20:12.197 "psk": "key0", 00:20:12.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:12.197 "hdgst": false, 00:20:12.197 "ddgst": false, 00:20:12.197 "multipath": "multipath" 00:20:12.197 } 00:20:12.197 }, 00:20:12.197 { 00:20:12.197 
"method": "bdev_nvme_set_hotplug", 00:20:12.197 "params": { 00:20:12.197 "period_us": 100000, 00:20:12.197 "enable": false 00:20:12.197 } 00:20:12.197 }, 00:20:12.197 { 00:20:12.197 "method": "bdev_enable_histogram", 00:20:12.197 "params": { 00:20:12.197 "name": "nvme0n1", 00:20:12.197 "enable": true 00:20:12.197 } 00:20:12.197 }, 00:20:12.197 { 00:20:12.197 "method": "bdev_wait_for_examine" 00:20:12.197 } 00:20:12.197 ] 00:20:12.197 }, 00:20:12.197 { 00:20:12.197 "subsystem": "nbd", 00:20:12.197 "config": [] 00:20:12.197 } 00:20:12.197 ] 00:20:12.197 }' 00:20:12.197 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:12.197 12:28:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.197 [2024-12-10 12:28:34.201389] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:20:12.197 [2024-12-10 12:28:34.201438] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1654987 ] 00:20:12.197 [2024-12-10 12:28:34.275752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.197 [2024-12-10 12:28:34.315660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.455 [2024-12-10 12:28:34.469154] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:13.020 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.020 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:13.020 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:13.020 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:13.278 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.278 12:28:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:13.278 Running I/O for 1 seconds... 00:20:14.212 5293.00 IOPS, 20.68 MiB/s 00:20:14.212 Latency(us) 00:20:14.212 [2024-12-10T11:28:36.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.212 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:14.212 Verification LBA range: start 0x0 length 0x2000 00:20:14.212 nvme0n1 : 1.02 5321.34 20.79 0.00 0.00 23858.29 8206.25 22567.18 00:20:14.212 [2024-12-10T11:28:36.380Z] =================================================================================================================== 00:20:14.212 [2024-12-10T11:28:36.380Z] Total : 5321.34 20.79 0.00 0.00 23858.29 8206.25 22567.18 00:20:14.212 { 00:20:14.212 "results": [ 00:20:14.212 { 00:20:14.212 "job": "nvme0n1", 00:20:14.212 "core_mask": "0x2", 00:20:14.212 "workload": "verify", 00:20:14.212 "status": "finished", 00:20:14.212 "verify_range": { 00:20:14.212 "start": 0, 00:20:14.212 "length": 8192 00:20:14.212 }, 00:20:14.212 "queue_depth": 128, 00:20:14.212 "io_size": 4096, 00:20:14.212 "runtime": 1.018728, 00:20:14.212 "iops": 5321.341908733244, 00:20:14.212 "mibps": 20.786491830989235, 00:20:14.212 "io_failed": 0, 00:20:14.212 "io_timeout": 0, 00:20:14.212 "avg_latency_us": 23858.28821186529, 00:20:14.212 "min_latency_us": 8206.24695652174, 00:20:14.212 "max_latency_us": 22567.179130434783 00:20:14.212 } 00:20:14.212 ], 00:20:14.212 "core_count": 1 00:20:14.212 } 00:20:14.471 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 
00:20:14.471 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:14.471 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:14.471 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:20:14.471 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:20:14.471 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:14.471 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:14.471 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:14.471 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:14.471 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:14.471 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:14.471 nvmf_trace.0 00:20:14.471 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:20:14.471 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1654987 00:20:14.471 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1654987 ']' 00:20:14.471 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1654987 00:20:14.471 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:14.471 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:14.471 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 
-- # ps --no-headers -o comm= 1654987 00:20:14.471 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:14.471 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:14.471 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1654987' 00:20:14.471 killing process with pid 1654987 00:20:14.471 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1654987 00:20:14.471 Received shutdown signal, test time was about 1.000000 seconds 00:20:14.471 00:20:14.471 Latency(us) 00:20:14.471 [2024-12-10T11:28:36.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.471 [2024-12-10T11:28:36.639Z] =================================================================================================================== 00:20:14.471 [2024-12-10T11:28:36.639Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:14.471 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1654987 00:20:14.730 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:14.730 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:14.730 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:14.730 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:14.730 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:14.730 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:14.730 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:14.730 rmmod nvme_tcp 00:20:14.730 rmmod nvme_fabrics 00:20:14.730 rmmod nvme_keyring 00:20:14.730 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:14.730 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:14.730 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:14.730 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1654742 ']' 00:20:14.730 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1654742 00:20:14.730 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1654742 ']' 00:20:14.730 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1654742 00:20:14.730 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:14.730 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:14.730 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1654742 00:20:14.730 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:14.730 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:14.730 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1654742' 00:20:14.731 killing process with pid 1654742 00:20:14.731 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1654742 00:20:14.731 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1654742 00:20:14.989 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:14.989 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:14.989 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:14.989 12:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:14.989 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:14.989 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:14.989 12:28:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:14.990 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:14.990 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:14.990 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.990 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:14.990 12:28:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:16.896 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ZoeGTnqxPY /tmp/tmp.naYql9N8NJ /tmp/tmp.VY630U2buX 00:20:17.155 00:20:17.155 real 1m19.752s 00:20:17.155 user 2m2.805s 00:20:17.155 sys 0m29.950s 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.155 ************************************ 00:20:17.155 END TEST nvmf_tls 00:20:17.155 ************************************ 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:17.155 
12:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:17.155 ************************************ 00:20:17.155 START TEST nvmf_fips 00:20:17.155 ************************************ 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:17.155 * Looking for test storage... 00:20:17.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/fips 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@340 -- # ver1_l=2 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:17.155 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # 
return 0 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:17.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.415 --rc genhtml_branch_coverage=1 00:20:17.415 --rc genhtml_function_coverage=1 00:20:17.415 --rc genhtml_legend=1 00:20:17.415 --rc geninfo_all_blocks=1 00:20:17.415 --rc geninfo_unexecuted_blocks=1 00:20:17.415 00:20:17.415 ' 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:17.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.415 --rc genhtml_branch_coverage=1 00:20:17.415 --rc genhtml_function_coverage=1 00:20:17.415 --rc genhtml_legend=1 00:20:17.415 --rc geninfo_all_blocks=1 00:20:17.415 --rc geninfo_unexecuted_blocks=1 00:20:17.415 00:20:17.415 ' 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:17.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.415 --rc genhtml_branch_coverage=1 00:20:17.415 --rc genhtml_function_coverage=1 00:20:17.415 --rc genhtml_legend=1 00:20:17.415 --rc geninfo_all_blocks=1 00:20:17.415 --rc geninfo_unexecuted_blocks=1 00:20:17.415 00:20:17.415 ' 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:17.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.415 --rc genhtml_branch_coverage=1 00:20:17.415 --rc genhtml_function_coverage=1 00:20:17.415 --rc genhtml_legend=1 00:20:17.415 --rc geninfo_all_blocks=1 00:20:17.415 --rc geninfo_unexecuted_blocks=1 00:20:17.415 00:20:17.415 ' 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.415 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:17.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:20:17.416 Error setting digest 00:20:17.416 40020E01B97F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:17.416 40020E01B97F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:17.416 12:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.416 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:17.417 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.417 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:17.417 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:17.417 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:17.417 12:28:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:23.988 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:23.988 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:23.988 Found net devices under 0000:86:00.0: cvl_0_0 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:23.988 Found net devices under 0000:86:00.1: cvl_0_1 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:23.988 12:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:23.988 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:23.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:23.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:20:23.988 00:20:23.988 --- 10.0.0.2 ping statistics --- 00:20:23.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.989 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:20:23.989 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:23.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:23.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:20:23.989 00:20:23.989 --- 10.0.0.1 ping statistics --- 00:20:23.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:23.989 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:20:23.989 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:23.989 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:23.989 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:23.989 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:23.989 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:23.989 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:23.989 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:23.989 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:23.989 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:23.989 12:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:23.989 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:23.989 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:23.989 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:23.989 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1659013 00:20:23.989 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1659013 00:20:23.989 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:23.989 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1659013 ']' 00:20:23.989 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.989 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:23.989 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.989 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:23.989 12:28:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:23.989 [2024-12-10 12:28:45.484017] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:20:23.989 [2024-12-10 12:28:45.484059] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.989 [2024-12-10 12:28:45.558983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.989 [2024-12-10 12:28:45.598516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:23.989 [2024-12-10 12:28:45.598552] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:23.989 [2024-12-10 12:28:45.598559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:23.989 [2024-12-10 12:28:45.598565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:23.989 [2024-12-10 12:28:45.598570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:23.989 [2024-12-10 12:28:45.599146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.247 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.247 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:24.247 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:24.247 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:24.247 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:24.247 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.247 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:24.247 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:24.247 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:24.247 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.wch 00:20:24.247 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:24.247 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.wch 00:20:24.247 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.wch 00:20:24.247 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.wch 00:20:24.247 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:20:24.505 [2024-12-10 12:28:46.536963] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.505 [2024-12-10 12:28:46.552965] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:24.505 [2024-12-10 12:28:46.553137] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.505 malloc0 00:20:24.505 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:24.505 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1659146 00:20:24.505 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:24.505 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1659146 /var/tmp/bdevperf.sock 00:20:24.505 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1659146 ']' 00:20:24.505 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:24.505 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:24.505 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:24.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:24.505 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:24.505 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:24.762 [2024-12-10 12:28:46.682153] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:20:24.762 [2024-12-10 12:28:46.682212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1659146 ] 00:20:24.762 [2024-12-10 12:28:46.759649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.762 [2024-12-10 12:28:46.799647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.762 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.762 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:24.762 12:28:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.wch 00:20:25.020 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:25.278 [2024-12-10 12:28:47.275694] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:25.278 TLSTESTn1 00:20:25.278 12:28:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:25.535 Running I/O for 10 seconds... 
00:20:27.404 5313.00 IOPS, 20.75 MiB/s [2024-12-10T11:28:50.505Z] 5389.00 IOPS, 21.05 MiB/s [2024-12-10T11:28:51.879Z] 5350.33 IOPS, 20.90 MiB/s [2024-12-10T11:28:52.813Z] 5288.75 IOPS, 20.66 MiB/s [2024-12-10T11:28:53.748Z] 5176.20 IOPS, 20.22 MiB/s [2024-12-10T11:28:54.684Z] 5061.83 IOPS, 19.77 MiB/s [2024-12-10T11:28:55.620Z] 4979.86 IOPS, 19.45 MiB/s [2024-12-10T11:28:56.555Z] 4915.88 IOPS, 19.20 MiB/s [2024-12-10T11:28:57.491Z] 4863.22 IOPS, 19.00 MiB/s [2024-12-10T11:28:57.750Z] 4823.40 IOPS, 18.84 MiB/s 00:20:35.582 Latency(us) 00:20:35.582 [2024-12-10T11:28:57.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.582 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:35.582 Verification LBA range: start 0x0 length 0x2000 00:20:35.582 TLSTESTn1 : 10.03 4822.96 18.84 0.00 0.00 26489.11 5869.75 29861.62 00:20:35.582 [2024-12-10T11:28:57.750Z] =================================================================================================================== 00:20:35.582 [2024-12-10T11:28:57.750Z] Total : 4822.96 18.84 0.00 0.00 26489.11 5869.75 29861.62 00:20:35.582 { 00:20:35.582 "results": [ 00:20:35.582 { 00:20:35.582 "job": "TLSTESTn1", 00:20:35.582 "core_mask": "0x4", 00:20:35.582 "workload": "verify", 00:20:35.582 "status": "finished", 00:20:35.582 "verify_range": { 00:20:35.582 "start": 0, 00:20:35.582 "length": 8192 00:20:35.582 }, 00:20:35.582 "queue_depth": 128, 00:20:35.582 "io_size": 4096, 00:20:35.582 "runtime": 10.027249, 00:20:35.582 "iops": 4822.957921958456, 00:20:35.582 "mibps": 18.839679382650218, 00:20:35.582 "io_failed": 0, 00:20:35.582 "io_timeout": 0, 00:20:35.582 "avg_latency_us": 26489.106338830337, 00:20:35.582 "min_latency_us": 5869.746086956522, 00:20:35.582 "max_latency_us": 29861.620869565217 00:20:35.582 } 00:20:35.582 ], 00:20:35.582 "core_count": 1 00:20:35.582 } 00:20:35.582 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:35.582 
12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:35.582 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:20:35.582 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:20:35.582 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:35.582 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:35.582 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:35.582 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:35.582 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:35.582 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:35.582 nvmf_trace.0 00:20:35.582 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:20:35.582 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1659146 00:20:35.582 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1659146 ']' 00:20:35.582 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1659146 00:20:35.582 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:35.582 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:35.582 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1659146 00:20:35.582 12:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:35.582 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:35.582 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1659146' 00:20:35.582 killing process with pid 1659146 00:20:35.582 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1659146 00:20:35.582 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.582 00:20:35.582 Latency(us) 00:20:35.582 [2024-12-10T11:28:57.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.582 [2024-12-10T11:28:57.750Z] =================================================================================================================== 00:20:35.582 [2024-12-10T11:28:57.750Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:35.582 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1659146 00:20:35.842 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:35.842 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:35.842 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:35.842 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:35.842 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:35.842 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:35.842 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:35.842 rmmod nvme_tcp 00:20:35.842 rmmod nvme_fabrics 00:20:35.842 rmmod nvme_keyring 00:20:35.842 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:20:35.842 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:35.842 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:35.842 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1659013 ']' 00:20:35.842 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1659013 00:20:35.842 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1659013 ']' 00:20:35.842 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1659013 00:20:35.842 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:35.842 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:35.842 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1659013 00:20:35.842 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:35.842 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:35.842 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1659013' 00:20:35.842 killing process with pid 1659013 00:20:35.842 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1659013 00:20:35.842 12:28:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1659013 00:20:36.101 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:36.101 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:36.101 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:36.101 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:20:36.101 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:36.101 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:36.101 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:36.101 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:36.101 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:36.101 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.101 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.101 12:28:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.wch 00:20:38.637 00:20:38.637 real 0m21.046s 00:20:38.637 user 0m21.510s 00:20:38.637 sys 0m10.212s 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:38.637 ************************************ 00:20:38.637 END TEST nvmf_fips 00:20:38.637 ************************************ 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:38.637 ************************************ 00:20:38.637 START TEST nvmf_control_msg_list 00:20:38.637 ************************************ 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:38.637 * Looking for test storage... 00:20:38.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:38.637 12:29:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:38.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.637 --rc genhtml_branch_coverage=1 00:20:38.637 --rc genhtml_function_coverage=1 00:20:38.637 --rc genhtml_legend=1 00:20:38.637 --rc geninfo_all_blocks=1 00:20:38.637 --rc geninfo_unexecuted_blocks=1 00:20:38.637 00:20:38.637 ' 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:38.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.637 --rc genhtml_branch_coverage=1 00:20:38.637 --rc genhtml_function_coverage=1 00:20:38.637 --rc genhtml_legend=1 00:20:38.637 --rc geninfo_all_blocks=1 00:20:38.637 --rc geninfo_unexecuted_blocks=1 00:20:38.637 00:20:38.637 ' 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:38.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.637 --rc genhtml_branch_coverage=1 00:20:38.637 --rc genhtml_function_coverage=1 00:20:38.637 --rc genhtml_legend=1 00:20:38.637 --rc geninfo_all_blocks=1 00:20:38.637 --rc geninfo_unexecuted_blocks=1 00:20:38.637 00:20:38.637 ' 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:20:38.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.637 --rc genhtml_branch_coverage=1 00:20:38.637 --rc genhtml_function_coverage=1 00:20:38.637 --rc genhtml_legend=1 00:20:38.637 --rc geninfo_all_blocks=1 00:20:38.637 --rc geninfo_unexecuted_blocks=1 00:20:38.637 00:20:38.637 ' 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.637 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.638 12:29:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:38.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:38.638 12:29:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:38.638 12:29:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:45.209 12:29:06 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:45.209 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:45.209 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:45.209 12:29:06 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.209 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:45.210 Found net devices under 0000:86:00.0: cvl_0_0 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:45.210 12:29:06 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:45.210 Found net devices under 0000:86:00.1: cvl_0_1 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:45.210 12:29:06 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:45.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:45.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:20:45.210 00:20:45.210 --- 10.0.0.2 ping statistics --- 00:20:45.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.210 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:45.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:45.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:20:45.210 00:20:45.210 --- 10.0.0.1 ping statistics --- 00:20:45.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.210 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1664422 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1664422 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1664422 ']' 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
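The `nvmf_tcp_init` sequence replayed above follows a fixed pattern: flush both NICs, create a network namespace, move the target NIC into it, assign 10.0.0.1/24 (initiator) and 10.0.0.2/24 (target), bring links up, open TCP port 4420 in the firewall, then ping in both directions. A dry-run sketch of that flow, using the interface and namespace names from this log; by default it only prints the commands (the real steps need root and the physical NICs):

```shell
# Dry-run sketch of the nvmf_tcp_init flow seen in the log above.
# Pass a runner ("echo" by default) as $1; pass nothing but "" to execute for real.
nvmf_tcp_init_sketch() {
    local RUN=${1:-echo} NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INI_IF=cvl_0_1
    $RUN ip -4 addr flush "$TGT_IF"
    $RUN ip -4 addr flush "$INI_IF"
    $RUN ip netns add "$NS"
    $RUN ip link set "$TGT_IF" netns "$NS"             # target NIC moves into the namespace
    $RUN ip addr add 10.0.0.1/24 dev "$INI_IF"         # initiator side address
    $RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target side address
    $RUN ip link set "$INI_IF" up
    $RUN ip netns exec "$NS" ip link set "$TGT_IF" up
    $RUN ip netns exec "$NS" ip link set lo up
    $RUN iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # open NVMe/TCP port
    $RUN ping -c 1 10.0.0.2                            # initiator -> target reachability
    $RUN ip netns exec "$NS" ping -c 1 10.0.0.1        # target -> initiator reachability
}
nvmf_tcp_init_sketch echo
```

The namespace is what lets one machine act as both initiator and target over real hardware: the target process later runs under `ip netns exec cvl_0_0_ns_spdk`, so traffic between 10.0.0.1 and 10.0.0.2 actually traverses the two physical ports.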
00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.210 [2024-12-10 12:29:06.437636] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:20:45.210 [2024-12-10 12:29:06.437683] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.210 [2024-12-10 12:29:06.515882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.210 [2024-12-10 12:29:06.558255] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.210 [2024-12-10 12:29:06.558286] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.210 [2024-12-10 12:29:06.558293] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.210 [2024-12-10 12:29:06.558299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.210 [2024-12-10 12:29:06.558304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
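`nvmfappstart` launches `nvmf_tgt` inside the target namespace, then `waitforlisten` polls (up to `max_retries=100`, as echoed above) until the process is alive and its UNIX-domain RPC socket `/var/tmp/spdk.sock` is serviced. A simplified sketch of that wait loop: here the readiness probe is reduced to "the socket path exists", whereas the real helper also confirms the socket answers RPC, so treat this as an illustration of the bounded-retry pattern only:

```shell
# Sketch of the waitforlisten pattern: poll until a pid is alive and its
# RPC socket path appears, giving up after a bounded number of retries.
waitforsocket() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1  # target process died; stop waiting
        [ -e "$sock" ] && return 0              # socket path appeared (real code probes RPC too)
        sleep 0.1                               # back off briefly between probes
    done
    return 1                                    # retries exhausted
}
```

The dual check matters: polling only the socket would hang the full retry budget if the app crashed on startup, while the `kill -0` probe fails fast as soon as the pid disappears.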
00:20:45.210 [2024-12-10 12:29:06.558847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.210 [2024-12-10 12:29:06.696375] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:45.210 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.211 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.211 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.211 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:45.211 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.211 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.211 Malloc0 00:20:45.211 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.211 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:45.211 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.211 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.211 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.211 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:45.211 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.211 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:45.211 [2024-12-10 12:29:06.744768] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.211 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.211 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1664631 00:20:45.211 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:45.211 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1664633 00:20:45.211 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:45.211 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1664634 00:20:45.211 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1664631 00:20:45.211 12:29:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:45.211 [2024-12-10 12:29:06.849521] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
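The RPC calls that `control_msg_list.sh` drives above form a standard target-setup sequence: create the TCP transport with a deliberately tiny control-message pool (the point of this test), create the subsystem, back it with a malloc bdev, attach the namespace, and add the listener. A sketch that replays the same calls; the default runner just prints them so the order is visible, and you would pass the path to SPDK's `rpc.py` to execute against a live target:

```shell
# The exact RPC sequence from the log above, wrapped for inspection.
rpc_setup_sketch() {
    local RPC=${1:-echo}   # e.g. "scripts/rpc.py -s /var/tmp/spdk.sock" against a real target
    $RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a        # -a: allow any host
    $RPC bdev_malloc_create -b Malloc0 32 512                       # 32 MiB bdev, 512 B blocks
    $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
}
rpc_setup_sketch echo
```

With `--control-msg-num 1` the target has a single control message buffer to share across the three perf clients launched next, which is exactly the contention path this test exercises.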
00:20:45.211 [2024-12-10 12:29:06.849708] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:45.211 [2024-12-10 12:29:06.849860] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:46.145 Initializing NVMe Controllers 00:20:46.145 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:46.145 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:46.145 Initialization complete. Launching workers. 00:20:46.145 ======================================================== 00:20:46.145 Latency(us) 00:20:46.145 Device Information : IOPS MiB/s Average min max 00:20:46.145 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 6969.95 27.23 143.11 132.84 355.14 00:20:46.145 ======================================================== 00:20:46.145 Total : 6969.95 27.23 143.11 132.84 355.14 00:20:46.145 00:20:46.145 Initializing NVMe Controllers 00:20:46.145 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:46.145 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:46.145 Initialization complete. Launching workers. 
00:20:46.145 ======================================================== 00:20:46.145 Latency(us) 00:20:46.145 Device Information : IOPS MiB/s Average min max 00:20:46.145 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 41011.43 40571.61 41897.17 00:20:46.145 ======================================================== 00:20:46.145 Total : 25.00 0.10 41011.43 40571.61 41897.17 00:20:46.145 00:20:46.145 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1664633 00:20:46.145 Initializing NVMe Controllers 00:20:46.145 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:46.145 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:46.146 Initialization complete. Launching workers. 00:20:46.146 ======================================================== 00:20:46.146 Latency(us) 00:20:46.146 Device Information : IOPS MiB/s Average min max 00:20:46.146 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 41088.69 40421.76 41931.46 00:20:46.146 ======================================================== 00:20:46.146 Total : 25.00 0.10 41088.69 40421.76 41931.46 00:20:46.146 00:20:46.146 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1664634 00:20:46.146 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:46.146 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:46.146 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:46.146 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:46.146 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:46.146 12:29:08 
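The script launches three `spdk_nvme_perf` clients concurrently on distinct core masks (0x2, 0x4, 0x8), records each pid, and `wait`s on them one by one, which is why the three latency tables above interleave. A sketch of that fork/wait pattern with a placeholder workload; `run_client`, `PERF`, and `TRADDR` are illustrative names, and the commented line shows the real invocation from the log:

```shell
# Launch N background clients, collect pids, then wait on each.
run_client() {
    local mask=$1
    # real body: $PERF -c "$mask" -q 1 -o 4096 -w randread -t 1 -r "$TRADDR"
    sleep 0.1                      # stand-in for the 1-second perf run
    echo "client $mask done"
}
run_client 0x2 & pid1=$!
run_client 0x4 & pid2=$!
run_client 0x8 & pid3=$!
wait "$pid1"; wait "$pid2"; wait "$pid3"   # mirrors the 'wait <perf_pid>' calls in the log
echo "all clients finished"
```

Waiting on explicit pids (rather than a bare `wait`) lets the script propagate each client's exit status individually, so a single failed perf run fails the test.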
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:46.146 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:46.146 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:46.146 rmmod nvme_tcp 00:20:46.146 rmmod nvme_fabrics 00:20:46.146 rmmod nvme_keyring 00:20:46.146 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:46.146 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:46.146 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:46.146 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 1664422 ']' 00:20:46.146 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1664422 00:20:46.146 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1664422 ']' 00:20:46.146 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1664422 00:20:46.146 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:46.146 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.146 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1664422 00:20:46.146 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:46.146 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:46.146 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1664422' 00:20:46.146 killing process with pid 1664422 00:20:46.146 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1664422 00:20:46.146 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1664422 00:20:46.405 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:46.405 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:46.405 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:46.405 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:46.405 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:46.405 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:46.405 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:46.405 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:46.405 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:46.405 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.405 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.405 12:29:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.942 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:48.942 00:20:48.942 real 0m10.257s 00:20:48.942 user 0m7.060s 
00:20:48.942 sys 0m5.547s 00:20:48.942 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:48.942 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:48.942 ************************************ 00:20:48.942 END TEST nvmf_control_msg_list 00:20:48.942 ************************************ 00:20:48.942 12:29:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:48.942 12:29:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:48.942 12:29:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:48.942 12:29:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:48.942 ************************************ 00:20:48.942 START TEST nvmf_wait_for_buf 00:20:48.942 ************************************ 00:20:48.942 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:48.942 * Looking for test storage... 
00:20:48.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:20:48.942 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:48.942 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:20:48.942 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:48.942 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:48.942 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:48.942 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:48.942 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:48.942 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:48.942 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:48.942 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:48.942 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:48.942 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:20:48.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.943 --rc genhtml_branch_coverage=1 00:20:48.943 --rc genhtml_function_coverage=1 00:20:48.943 --rc genhtml_legend=1 00:20:48.943 --rc geninfo_all_blocks=1 00:20:48.943 --rc geninfo_unexecuted_blocks=1 00:20:48.943 00:20:48.943 ' 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:48.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.943 --rc genhtml_branch_coverage=1 00:20:48.943 --rc genhtml_function_coverage=1 00:20:48.943 --rc genhtml_legend=1 00:20:48.943 --rc geninfo_all_blocks=1 00:20:48.943 --rc geninfo_unexecuted_blocks=1 00:20:48.943 00:20:48.943 ' 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:48.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.943 --rc genhtml_branch_coverage=1 00:20:48.943 --rc genhtml_function_coverage=1 00:20:48.943 --rc genhtml_legend=1 00:20:48.943 --rc geninfo_all_blocks=1 00:20:48.943 --rc geninfo_unexecuted_blocks=1 00:20:48.943 00:20:48.943 ' 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:48.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.943 --rc genhtml_branch_coverage=1 00:20:48.943 --rc genhtml_function_coverage=1 00:20:48.943 --rc genhtml_legend=1 00:20:48.943 --rc geninfo_all_blocks=1 00:20:48.943 --rc geninfo_unexecuted_blocks=1 00:20:48.943 00:20:48.943 ' 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:48.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:48.943 12:29:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:54.219 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:54.478 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:54.479 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:54.479 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:54.479 Found net devices under 0000:86:00.0: cvl_0_0 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:54.479 12:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:54.479 Found net devices under 0000:86:00.1: cvl_0_1 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:54.479 12:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:54.479 12:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:54.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:54.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:20:54.479 00:20:54.479 --- 10.0.0.2 ping statistics --- 00:20:54.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.479 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:54.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:54.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:20:54.479 00:20:54.479 --- 10.0.0.1 ping statistics --- 00:20:54.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.479 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:54.479 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:54.738 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:54.738 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:54.738 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:54.738 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.738 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1668340 00:20:54.738 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:54.738 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1668340 00:20:54.738 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1668340 ']' 00:20:54.738 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.738 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.738 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.738 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.738 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.738 [2024-12-10 12:29:16.724988] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:20:54.738 [2024-12-10 12:29:16.725034] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.738 [2024-12-10 12:29:16.806136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.738 [2024-12-10 12:29:16.844567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.738 [2024-12-10 12:29:16.844602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:54.738 [2024-12-10 12:29:16.844608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.738 [2024-12-10 12:29:16.844615] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.738 [2024-12-10 12:29:16.844621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:54.738 [2024-12-10 12:29:16.845189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.738 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.738 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:54.738 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:54.738 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:54.738 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.997 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.997 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:54.997 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf 00:20:54.997 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:54.997 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.997 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.997 
12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.997 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:54.997 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.997 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.997 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.997 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:54.997 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.997 12:29:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.997 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.997 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:54.997 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.997 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.997 Malloc0 00:20:54.997 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.997 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:54.997 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.997 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:20:54.997 [2024-12-10 12:29:17.030471] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.997 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.997 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:54.997 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.997 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.997 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.997 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:54.997 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.997 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.997 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.997 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:54.997 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.998 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:54.998 [2024-12-10 12:29:17.058688] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.998 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:54.998 12:29:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:54.998 [2024-12-10 12:29:17.140237] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:56.899 Initializing NVMe Controllers 00:20:56.899 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:56.899 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:56.899 Initialization complete. Launching workers. 00:20:56.899 ======================================================== 00:20:56.899 Latency(us) 00:20:56.899 Device Information : IOPS MiB/s Average min max 00:20:56.899 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.93 15.99 32366.27 7270.51 63850.15 00:20:56.899 ======================================================== 00:20:56.899 Total : 127.93 15.99 32366.27 7270.51 63850.15 00:20:56.899 00:20:56.899 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:56.899 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:56.899 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.899 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:56.899 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.899 12:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022 00:20:56.899 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]] 00:20:56.899 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:56.899 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:56.899 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:56.899 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:56.899 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:56.899 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:56.899 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:56.899 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:56.899 rmmod nvme_tcp 00:20:56.899 rmmod nvme_fabrics 00:20:56.899 rmmod nvme_keyring 00:20:56.899 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:56.899 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:56.899 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:56.899 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1668340 ']' 00:20:56.900 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1668340 00:20:56.900 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1668340 ']' 00:20:56.900 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1668340 
00:20:56.900 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:20:56.900 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.900 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1668340 00:20:56.900 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:56.900 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:56.900 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1668340' 00:20:56.900 killing process with pid 1668340 00:20:56.900 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1668340 00:20:56.900 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1668340 00:20:56.900 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:56.900 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:56.900 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:56.900 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:56.900 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:56.900 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:56.900 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:56.900 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:56.900 12:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:56.900 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.900 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:56.900 12:29:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.436 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:59.436 00:20:59.436 real 0m10.408s 00:20:59.436 user 0m4.000s 00:20:59.436 sys 0m4.864s 00:20:59.436 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:59.436 12:29:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:59.436 ************************************ 00:20:59.436 END TEST nvmf_wait_for_buf 00:20:59.436 ************************************ 00:20:59.436 12:29:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:59.436 12:29:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:59.436 12:29:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:59.436 12:29:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:59.436 12:29:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:59.436 12:29:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:04.832 
12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:04.832 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:04.832 12:29:26 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:04.832 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:04.832 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:04.833 Found net devices under 0000:86:00.0: cvl_0_0 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:04.833 Found net devices under 0000:86:00.1: cvl_0_1 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:04.833 ************************************ 00:21:04.833 START TEST nvmf_perf_adq 00:21:04.833 ************************************ 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:04.833 * Looking for test storage... 00:21:04.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:04.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.833 --rc genhtml_branch_coverage=1 00:21:04.833 --rc genhtml_function_coverage=1 00:21:04.833 --rc genhtml_legend=1 00:21:04.833 --rc geninfo_all_blocks=1 00:21:04.833 --rc geninfo_unexecuted_blocks=1 00:21:04.833 00:21:04.833 ' 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:04.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.833 --rc genhtml_branch_coverage=1 00:21:04.833 --rc genhtml_function_coverage=1 00:21:04.833 --rc genhtml_legend=1 00:21:04.833 --rc geninfo_all_blocks=1 00:21:04.833 --rc geninfo_unexecuted_blocks=1 00:21:04.833 00:21:04.833 ' 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:04.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.833 --rc genhtml_branch_coverage=1 00:21:04.833 --rc genhtml_function_coverage=1 00:21:04.833 --rc genhtml_legend=1 00:21:04.833 --rc geninfo_all_blocks=1 00:21:04.833 --rc geninfo_unexecuted_blocks=1 00:21:04.833 00:21:04.833 ' 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:04.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:04.833 --rc genhtml_branch_coverage=1 00:21:04.833 --rc genhtml_function_coverage=1 00:21:04.833 --rc genhtml_legend=1 00:21:04.833 --rc geninfo_all_blocks=1 00:21:04.833 --rc geninfo_unexecuted_blocks=1 00:21:04.833 00:21:04.833 ' 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.833 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.834 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.834 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.834 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.834 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:04.834 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.834 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:04.834 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:04.834 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:04.834 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.834 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.834 12:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.834 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:04.834 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:04.834 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:04.834 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:04.834 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:04.834 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:04.834 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:04.834 12:29:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:11.430 12:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:11.430 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:11.430 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:11.430 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.431 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:11.431 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.431 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.431 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:11.431 Found net devices under 0000:86:00.0: cvl_0_0 00:21:11.431 12:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.431 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:11.431 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.431 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:11.431 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.431 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:11.431 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.431 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.431 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:11.431 Found net devices under 0000:86:00.1: cvl_0_1 00:21:11.431 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.431 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:11.431 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.431 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:11.431 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf 00:21:11.431 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:11.431 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:21:11.431 12:29:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:11.690 12:29:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:14.229 12:29:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:19.506 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:19.506 12:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:19.506 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:19.506 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:19.507 Found net devices under 0000:86:00.0: cvl_0_0 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:19.507 Found net devices under 0000:86:00.1: cvl_0_1 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:19.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:19.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:21:19.507 00:21:19.507 --- 10.0.0.2 ping statistics --- 00:21:19.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.507 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:19.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:19.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:21:19.507 00:21:19.507 --- 10.0.0.1 ping statistics --- 00:21:19.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.507 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
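[editor's note] The nvmf_tcp_init sequence traced above (one E810 port moved into a network namespace as the target side, its peer left in the root namespace as the initiator, then a cross-ping) condenses to the following privileged shell fragment. Interface, namespace, and address names are taken from the log itself; this is an illustrative summary of the topology, not a runnable sandbox step, since it needs root and the physical cvl_0_0/cvl_0_1 interfaces.

```
# Sketch of the two-port topology built by nvmf_tcp_init above (requires root).
ip netns add cvl_0_0_ns_spdk                          # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move target port in
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic to port 4420 on the initiator-facing interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target
```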
start_nvmf_tgt 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1676756 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1676756 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1676756 ']' 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.507 [2024-12-10 12:29:41.456552] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:21:19.507 [2024-12-10 12:29:41.456599] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.507 [2024-12-10 12:29:41.536359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:19.507 [2024-12-10 12:29:41.579142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.507 [2024-12-10 12:29:41.579184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.507 [2024-12-10 12:29:41.579192] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.507 [2024-12-10 12:29:41.579198] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.507 [2024-12-10 12:29:41.579203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:19.507 [2024-12-10 12:29:41.580763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.507 [2024-12-10 12:29:41.580873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.507 [2024-12-10 12:29:41.580983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.507 [2024-12-10 12:29:41.580984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.507 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:19.766 12:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.766 [2024-12-10 12:29:41.787538] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.766 Malloc1 00:21:19.766 12:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:19.766 [2024-12-10 12:29:41.853773] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1676782 00:21:19.766 12:29:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:19.766 12:29:41 
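[editor's note] The adq_configure_nvmf_target steps traced above reduce to the RPC sequence below, issued against the target's default /var/tmp/spdk.sock (the target was started with --wait-for-rpc, so socket options are set before framework_start_init). Every command appears in the log via rpc_cmd; the bare `rpc.py` invocation is shown here only as an illustrative summary.

```
# Equivalent rpc.py calls for the target configuration traced above
# (run inside the target's namespace against the default RPC socket).
rpc.py sock_impl_set_options -i posix --enable-placement-id 0 \
    --enable-zerocopy-send-server
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
```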
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:22.295 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:22.295 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.295 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:22.295 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.295 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:22.295 "tick_rate": 2300000000, 00:21:22.295 "poll_groups": [ 00:21:22.295 { 00:21:22.295 "name": "nvmf_tgt_poll_group_000", 00:21:22.295 "admin_qpairs": 1, 00:21:22.295 "io_qpairs": 1, 00:21:22.295 "current_admin_qpairs": 1, 00:21:22.295 "current_io_qpairs": 1, 00:21:22.295 "pending_bdev_io": 0, 00:21:22.295 "completed_nvme_io": 19230, 00:21:22.295 "transports": [ 00:21:22.295 { 00:21:22.295 "trtype": "TCP" 00:21:22.295 } 00:21:22.295 ] 00:21:22.295 }, 00:21:22.295 { 00:21:22.295 "name": "nvmf_tgt_poll_group_001", 00:21:22.295 "admin_qpairs": 0, 00:21:22.295 "io_qpairs": 1, 00:21:22.295 "current_admin_qpairs": 0, 00:21:22.295 "current_io_qpairs": 1, 00:21:22.295 "pending_bdev_io": 0, 00:21:22.295 "completed_nvme_io": 19552, 00:21:22.295 "transports": [ 00:21:22.295 { 00:21:22.295 "trtype": "TCP" 00:21:22.295 } 00:21:22.295 ] 00:21:22.295 }, 00:21:22.295 { 00:21:22.295 "name": "nvmf_tgt_poll_group_002", 00:21:22.295 "admin_qpairs": 0, 00:21:22.295 "io_qpairs": 1, 00:21:22.295 "current_admin_qpairs": 0, 00:21:22.295 "current_io_qpairs": 1, 00:21:22.295 "pending_bdev_io": 0, 00:21:22.295 "completed_nvme_io": 19299, 00:21:22.295 
"transports": [ 00:21:22.295 { 00:21:22.295 "trtype": "TCP" 00:21:22.295 } 00:21:22.295 ] 00:21:22.295 }, 00:21:22.295 { 00:21:22.295 "name": "nvmf_tgt_poll_group_003", 00:21:22.295 "admin_qpairs": 0, 00:21:22.295 "io_qpairs": 1, 00:21:22.295 "current_admin_qpairs": 0, 00:21:22.295 "current_io_qpairs": 1, 00:21:22.295 "pending_bdev_io": 0, 00:21:22.295 "completed_nvme_io": 19137, 00:21:22.295 "transports": [ 00:21:22.295 { 00:21:22.295 "trtype": "TCP" 00:21:22.295 } 00:21:22.295 ] 00:21:22.295 } 00:21:22.295 ] 00:21:22.295 }' 00:21:22.295 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:22.295 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:22.295 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:22.295 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:22.295 12:29:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1676782 00:21:30.404 Initializing NVMe Controllers 00:21:30.404 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:30.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:30.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:30.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:30.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:30.404 Initialization complete. Launching workers. 
00:21:30.404 ======================================================== 00:21:30.404 Latency(us) 00:21:30.404 Device Information : IOPS MiB/s Average min max 00:21:30.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10250.64 40.04 6256.80 2175.64 44334.91 00:21:30.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10361.54 40.47 6175.57 2293.97 9580.40 00:21:30.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10081.75 39.38 6349.17 2195.84 10953.95 00:21:30.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10236.94 39.99 6251.52 2432.24 9993.33 00:21:30.404 ======================================================== 00:21:30.404 Total : 40930.88 159.89 6257.67 2175.64 44334.91 00:21:30.404 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:30.404 rmmod nvme_tcp 00:21:30.404 rmmod nvme_fabrics 00:21:30.404 rmmod nvme_keyring 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:30.404 12:29:52 
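[editor's note] As a quick cross-check of the results table above, the Total IOPS row is simply the sum of the four per-core rows (a difference in the last printed digit is rounding of the per-core values):

```shell
# Sum the four per-core IOPS figures reported by spdk_nvme_perf above.
awk 'BEGIN {
  total = 10250.64 + 10361.54 + 10081.75 + 10236.94
  printf "%.2f\n", total
}'
```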
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1676756 ']' 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1676756 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1676756 ']' 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1676756 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1676756 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1676756' 00:21:30.404 killing process with pid 1676756 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1676756 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1676756 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:30.404 
12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:30.404 12:29:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:32.310 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:32.310 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:32.310 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:32.310 12:29:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:33.688 12:29:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:36.223 12:29:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:41.498 12:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:41.498 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.498 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:41.499 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:41.499 Found net devices under 0000:86:00.0: cvl_0_0 00:21:41.499 12:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:41.499 Found net devices under 0000:86:00.1: cvl_0_1 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:41.499 12:30:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up
00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:41.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:41.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms
00:21:41.499
00:21:41.499 --- 10.0.0.2 ping statistics ---
00:21:41.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:41.499 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms
00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:41.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:41.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms
00:21:41.499
00:21:41.499 --- 10.0.0.1 ping statistics ---
00:21:41.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:41.499 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms
00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver
00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:21:41.499 net.core.busy_poll = 1
00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq --
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:41.499 net.core.busy_read = 1 00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1680693 00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1680693 00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1680693 ']' 00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.499 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.499 [2024-12-10 12:30:03.574979] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:21:41.499 [2024-12-10 12:30:03.575035] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.499 [2024-12-10 12:30:03.654335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:41.758 [2024-12-10 12:30:03.698556] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.758 [2024-12-10 12:30:03.698592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.758 [2024-12-10 12:30:03.698600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.758 [2024-12-10 12:30:03.698606] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:41.758 [2024-12-10 12:30:03.698612] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.758 [2024-12-10 12:30:03.700052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.758 [2024-12-10 12:30:03.700179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.758 [2024-12-10 12:30:03.700246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.758 [2024-12-10 12:30:03.700247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.758 [2024-12-10 12:30:03.915131] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:41.758 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.017 12:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.017 Malloc1 00:21:42.017 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.017 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:42.017 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.017 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.017 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.017 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:42.017 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.017 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.017 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.017 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:42.017 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.017 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:42.017 [2024-12-10 12:30:03.980090] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.017 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.017 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1680938 
00:21:42.017 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2
00:21:42.017 12:30:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:21:43.917 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats
00:21:43.917 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:43.917 12:30:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:43.917 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:43.917 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{
00:21:43.917 "tick_rate": 2300000000,
00:21:43.917 "poll_groups": [
00:21:43.917 {
00:21:43.917 "name": "nvmf_tgt_poll_group_000",
00:21:43.917 "admin_qpairs": 1,
00:21:43.917 "io_qpairs": 4,
00:21:43.917 "current_admin_qpairs": 1,
00:21:43.917 "current_io_qpairs": 4,
00:21:43.917 "pending_bdev_io": 0,
00:21:43.917 "completed_nvme_io": 42568,
00:21:43.917 "transports": [
00:21:43.917 {
00:21:43.917 "trtype": "TCP"
00:21:43.917 }
00:21:43.917 ]
00:21:43.917 },
00:21:43.917 {
00:21:43.917 "name": "nvmf_tgt_poll_group_001",
00:21:43.917 "admin_qpairs": 0,
00:21:43.917 "io_qpairs": 0,
00:21:43.917 "current_admin_qpairs": 0,
00:21:43.917 "current_io_qpairs": 0,
00:21:43.917 "pending_bdev_io": 0,
00:21:43.917 "completed_nvme_io": 0,
00:21:43.917 "transports": [
00:21:43.917 {
00:21:43.917 "trtype": "TCP"
00:21:43.917 }
00:21:43.917 ]
00:21:43.917 },
00:21:43.917 {
00:21:43.917 "name": "nvmf_tgt_poll_group_002",
00:21:43.917 "admin_qpairs": 0,
00:21:43.917 "io_qpairs": 0,
00:21:43.917 "current_admin_qpairs": 0,
00:21:43.917 "current_io_qpairs": 0,
00:21:43.917 "pending_bdev_io": 0,
00:21:43.917 "completed_nvme_io": 0,
00:21:43.917 "transports": [
00:21:43.917 {
00:21:43.917 "trtype": "TCP"
00:21:43.917 }
00:21:43.917 ]
00:21:43.917 },
00:21:43.917 {
00:21:43.917 "name": "nvmf_tgt_poll_group_003",
00:21:43.917 "admin_qpairs": 0,
00:21:43.917 "io_qpairs": 0,
00:21:43.917 "current_admin_qpairs": 0,
00:21:43.917 "current_io_qpairs": 0,
00:21:43.917 "pending_bdev_io": 0,
00:21:43.917 "completed_nvme_io": 0,
00:21:43.917 "transports": [
00:21:43.917 {
00:21:43.917 "trtype": "TCP"
00:21:43.917 }
00:21:43.917 ]
00:21:43.917 }
00:21:43.917 ]
00:21:43.917 }'
00:21:43.917 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:21:43.917 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l
00:21:43.917 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3
00:21:43.917 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]]
00:21:43.917 12:30:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1680938
00:21:52.026 Initializing NVMe Controllers
00:21:52.026 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:52.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:21:52.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:21:52.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:21:52.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:21:52.026 Initialization complete. Launching workers.
00:21:52.026 ========================================================
00:21:52.026 Latency(us)
00:21:52.026 Device Information : IOPS MiB/s Average min max
00:21:52.026 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5711.60 22.31 11208.48 1501.73 56167.50
00:21:52.026 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5808.70 22.69 11021.34 957.10 56620.15
00:21:52.026 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5605.00 21.89 11445.41 1485.86 55290.57
00:21:52.026 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5734.10 22.40 11165.70 1258.31 56077.18
00:21:52.026 ========================================================
00:21:52.026 Total : 22859.40 89.29 11208.29 957.10 56620.15
00:21:52.026
00:21:52.026 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
00:21:52.026 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:21:52.026 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:21:52.026 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:52.026 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:21:52.026 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:52.026 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:52.026 rmmod nvme_tcp
00:21:52.285 rmmod nvme_fabrics
00:21:52.285 rmmod nvme_keyring
00:21:52.285 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:52.285 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:21:52.285 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:21:52.285 12:30:14
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1680693 ']' 00:21:52.285 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1680693 00:21:52.285 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1680693 ']' 00:21:52.285 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1680693 00:21:52.285 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:52.285 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:52.285 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1680693 00:21:52.285 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:52.285 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:52.285 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1680693' 00:21:52.285 killing process with pid 1680693 00:21:52.285 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1680693 00:21:52.285 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1680693 00:21:52.544 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:52.544 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:52.544 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:52.544 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:52.544 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:52.544 
12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:52.544 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:52.544 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:52.544 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:52.544 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.544 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.544 12:30:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:55.840 00:21:55.840 real 0m50.913s 00:21:55.840 user 2m44.556s 00:21:55.840 sys 0m9.918s 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:55.840 ************************************ 00:21:55.840 END TEST nvmf_perf_adq 00:21:55.840 ************************************ 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:21:55.840 ************************************ 00:21:55.840 START TEST nvmf_shutdown 00:21:55.840 ************************************ 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:55.840 * Looking for test storage... 00:21:55.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:55.840 12:30:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:55.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.840 --rc genhtml_branch_coverage=1 00:21:55.840 --rc genhtml_function_coverage=1 00:21:55.840 --rc genhtml_legend=1 00:21:55.840 --rc geninfo_all_blocks=1 00:21:55.840 --rc geninfo_unexecuted_blocks=1 00:21:55.840 00:21:55.840 ' 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:55.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.840 --rc genhtml_branch_coverage=1 00:21:55.840 --rc genhtml_function_coverage=1 00:21:55.840 --rc genhtml_legend=1 00:21:55.840 --rc geninfo_all_blocks=1 00:21:55.840 --rc geninfo_unexecuted_blocks=1 00:21:55.840 00:21:55.840 ' 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:55.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.840 --rc genhtml_branch_coverage=1 00:21:55.840 --rc genhtml_function_coverage=1 00:21:55.840 --rc genhtml_legend=1 00:21:55.840 --rc geninfo_all_blocks=1 00:21:55.840 --rc geninfo_unexecuted_blocks=1 00:21:55.840 00:21:55.840 ' 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:55.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:55.840 --rc genhtml_branch_coverage=1 00:21:55.840 --rc genhtml_function_coverage=1 00:21:55.840 --rc genhtml_legend=1 00:21:55.840 --rc geninfo_all_blocks=1 00:21:55.840 --rc geninfo_unexecuted_blocks=1 00:21:55.840 00:21:55.840 ' 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.840 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:55.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:55.841 ************************************ 00:21:55.841 START TEST nvmf_shutdown_tc1 00:21:55.841 ************************************ 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:55.841 12:30:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:02.411 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.411 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:02.411 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:02.411 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:02.411 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:02.412 12:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.412 12:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:02.412 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.412 12:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:02.412 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:02.412 Found net devices under 0000:86:00.0: cvl_0_0 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:02.412 Found net devices under 0000:86:00.1: cvl_0_1 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:02.412 12:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:02.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:22:02.412 00:22:02.412 --- 10.0.0.2 ping statistics --- 00:22:02.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.412 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:22:02.412 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:02.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:22:02.413 00:22:02.413 --- 10.0.0.1 ping statistics --- 00:22:02.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.413 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:22:02.413 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.413 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:02.413 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:02.413 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.413 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:02.413 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:02.413 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.413 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:02.413 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:02.413 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:02.413 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:02.413 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:02.413 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:02.413 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1686754 00:22:02.413 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1686754 00:22:02.413 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:02.413 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1686754 ']' 00:22:02.413 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.413 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.413 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:02.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.413 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.413 12:30:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:02.413 [2024-12-10 12:30:23.950697] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:22:02.413 [2024-12-10 12:30:23.950740] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.413 [2024-12-10 12:30:24.029341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:02.413 [2024-12-10 12:30:24.071985] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.413 [2024-12-10 12:30:24.072022] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.413 [2024-12-10 12:30:24.072030] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.413 [2024-12-10 12:30:24.072036] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.413 [2024-12-10 12:30:24.072041] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
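The `nvmf_tcp_init` trace above builds a two-interface TCP test topology: one NIC (`cvl_0_0`) is moved into a private network namespace and addressed as the target (10.0.0.2), while its peer (`cvl_0_1`) stays in the root namespace as the initiator (10.0.0.1), with an iptables rule opening port 4420 and a ping in each direction to verify reachability. A minimal dry-run sketch of those steps is below; the `run` wrapper only echoes each command so the sketch executes without root privileges, and the interface names simply mirror the ones in the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps traced above.
# "run" only echoes each command; replace its body with "$@" (and run as
# root) to actually apply the configuration.
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NETNS=${TARGET_IF}_ns_spdk

run() { echo "+ $*"; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NETNS"
run ip link set "$TARGET_IF" netns "$NETNS"        # target NIC moves into the namespace
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"    # initiator stays in the root namespace
run ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NETNS" ip link set "$TARGET_IF" up
run ip netns exec "$NETNS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                             # root namespace -> target
run ip netns exec "$NETNS" ping -c 1 10.0.0.1      # namespace -> initiator
```

With the topology in place, the target application is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...`), which is why `NVMF_APP` is prefixed with `NVMF_TARGET_NS_CMD` in the trace.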
00:22:02.413 [2024-12-10 12:30:24.073677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.413 [2024-12-10 12:30:24.073806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:02.413 [2024-12-10 12:30:24.073912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.413 [2024-12-10 12:30:24.073913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:02.413 [2024-12-10 12:30:24.215840] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.413 12:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # 
cat 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.413 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:02.413 Malloc1 00:22:02.413 [2024-12-10 12:30:24.334977] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:02.413 Malloc2 00:22:02.413 Malloc3 00:22:02.413 Malloc4 00:22:02.413 Malloc5 00:22:02.413 Malloc6 00:22:02.413 Malloc7 00:22:02.673 Malloc8 00:22:02.673 Malloc9 
00:22:02.673 Malloc10 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1686832 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1686832 /var/tmp/bdevperf.sock 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1686832 ']' 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.673 { 00:22:02.673 "params": { 00:22:02.673 "name": "Nvme$subsystem", 00:22:02.673 "trtype": "$TEST_TRANSPORT", 00:22:02.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.673 "adrfam": "ipv4", 00:22:02.673 "trsvcid": "$NVMF_PORT", 00:22:02.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.673 "hdgst": ${hdgst:-false}, 00:22:02.673 "ddgst": ${ddgst:-false} 00:22:02.673 }, 00:22:02.673 "method": "bdev_nvme_attach_controller" 00:22:02.673 } 00:22:02.673 EOF 00:22:02.673 )") 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.673 { 00:22:02.673 "params": { 00:22:02.673 "name": "Nvme$subsystem", 00:22:02.673 "trtype": "$TEST_TRANSPORT", 00:22:02.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.673 "adrfam": "ipv4", 00:22:02.673 "trsvcid": "$NVMF_PORT", 00:22:02.673 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.673 "hdgst": ${hdgst:-false}, 00:22:02.673 "ddgst": ${ddgst:-false} 00:22:02.673 }, 00:22:02.673 "method": "bdev_nvme_attach_controller" 00:22:02.673 } 00:22:02.673 EOF 00:22:02.673 )") 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.673 { 00:22:02.673 "params": { 00:22:02.673 "name": "Nvme$subsystem", 00:22:02.673 "trtype": "$TEST_TRANSPORT", 00:22:02.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.673 "adrfam": "ipv4", 00:22:02.673 "trsvcid": "$NVMF_PORT", 00:22:02.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.673 "hdgst": ${hdgst:-false}, 00:22:02.673 "ddgst": ${ddgst:-false} 00:22:02.673 }, 00:22:02.673 "method": "bdev_nvme_attach_controller" 00:22:02.673 } 00:22:02.673 EOF 00:22:02.673 )") 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.673 { 00:22:02.673 "params": { 00:22:02.673 "name": "Nvme$subsystem", 00:22:02.673 "trtype": "$TEST_TRANSPORT", 00:22:02.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.673 "adrfam": "ipv4", 00:22:02.673 "trsvcid": "$NVMF_PORT", 00:22:02.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.673 "hdgst": 
${hdgst:-false}, 00:22:02.673 "ddgst": ${ddgst:-false} 00:22:02.673 }, 00:22:02.673 "method": "bdev_nvme_attach_controller" 00:22:02.673 } 00:22:02.673 EOF 00:22:02.673 )") 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.673 { 00:22:02.673 "params": { 00:22:02.673 "name": "Nvme$subsystem", 00:22:02.673 "trtype": "$TEST_TRANSPORT", 00:22:02.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.673 "adrfam": "ipv4", 00:22:02.673 "trsvcid": "$NVMF_PORT", 00:22:02.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.673 "hdgst": ${hdgst:-false}, 00:22:02.673 "ddgst": ${ddgst:-false} 00:22:02.673 }, 00:22:02.673 "method": "bdev_nvme_attach_controller" 00:22:02.673 } 00:22:02.673 EOF 00:22:02.673 )") 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.673 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.673 { 00:22:02.673 "params": { 00:22:02.673 "name": "Nvme$subsystem", 00:22:02.673 "trtype": "$TEST_TRANSPORT", 00:22:02.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.674 "adrfam": "ipv4", 00:22:02.674 "trsvcid": "$NVMF_PORT", 00:22:02.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.674 "hdgst": ${hdgst:-false}, 00:22:02.674 "ddgst": ${ddgst:-false} 00:22:02.674 }, 00:22:02.674 "method": "bdev_nvme_attach_controller" 
00:22:02.674 } 00:22:02.674 EOF 00:22:02.674 )") 00:22:02.674 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:02.674 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.674 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.674 { 00:22:02.674 "params": { 00:22:02.674 "name": "Nvme$subsystem", 00:22:02.674 "trtype": "$TEST_TRANSPORT", 00:22:02.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.674 "adrfam": "ipv4", 00:22:02.674 "trsvcid": "$NVMF_PORT", 00:22:02.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.674 "hdgst": ${hdgst:-false}, 00:22:02.674 "ddgst": ${ddgst:-false} 00:22:02.674 }, 00:22:02.674 "method": "bdev_nvme_attach_controller" 00:22:02.674 } 00:22:02.674 EOF 00:22:02.674 )") 00:22:02.674 [2024-12-10 12:30:24.813620] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:22:02.674 [2024-12-10 12:30:24.813664] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:02.674 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:02.674 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.674 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.674 { 00:22:02.674 "params": { 00:22:02.674 "name": "Nvme$subsystem", 00:22:02.674 "trtype": "$TEST_TRANSPORT", 00:22:02.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.674 "adrfam": "ipv4", 00:22:02.674 "trsvcid": "$NVMF_PORT", 00:22:02.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.674 "hdgst": ${hdgst:-false}, 00:22:02.674 "ddgst": ${ddgst:-false} 00:22:02.674 }, 00:22:02.674 "method": "bdev_nvme_attach_controller" 00:22:02.674 } 00:22:02.674 EOF 00:22:02.674 )") 00:22:02.674 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:02.674 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.674 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.674 { 00:22:02.674 "params": { 00:22:02.674 "name": "Nvme$subsystem", 00:22:02.674 "trtype": "$TEST_TRANSPORT", 00:22:02.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.674 "adrfam": "ipv4", 00:22:02.674 "trsvcid": "$NVMF_PORT", 00:22:02.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.674 "hdgst": ${hdgst:-false}, 
00:22:02.674 "ddgst": ${ddgst:-false} 00:22:02.674 }, 00:22:02.674 "method": "bdev_nvme_attach_controller" 00:22:02.674 } 00:22:02.674 EOF 00:22:02.674 )") 00:22:02.674 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:02.674 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:02.674 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:02.674 { 00:22:02.674 "params": { 00:22:02.674 "name": "Nvme$subsystem", 00:22:02.674 "trtype": "$TEST_TRANSPORT", 00:22:02.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.674 "adrfam": "ipv4", 00:22:02.674 "trsvcid": "$NVMF_PORT", 00:22:02.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.674 "hdgst": ${hdgst:-false}, 00:22:02.674 "ddgst": ${ddgst:-false} 00:22:02.674 }, 00:22:02.674 "method": "bdev_nvme_attach_controller" 00:22:02.674 } 00:22:02.674 EOF 00:22:02.674 )") 00:22:02.674 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:02.932 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
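The repeated `config+=("$(cat <<-EOF ...)")` entries above are `gen_nvmf_target_json` at work: for each subsystem id it appends one `bdev_nvme_attach_controller` params fragment (the `$subsystem` variables expand inside the unquoted heredoc), then joins the fragments with `IFS=,` before the final `jq .` pass. A simplified sketch of that pattern follows, with the address and port values taken from the log and the `jq` validation step omitted; the exact output framing of the real helper differs.

```shell
# Simplified sketch of the gen_nvmf_target_json pattern traced above:
# one heredoc fragment per subsystem id, comma-joined into a JSON array.
gen_target_json() {
  local subsystem
  local config=()
  for subsystem in "${@:-1}"; do
    # Unquoted EOF so $subsystem expands inside the fragment.
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  local IFS=,                       # "${config[*]}" joins on the first IFS char
  printf '[%s]\n' "${config[*]}"
}

gen_target_json 1 2 3
```

In the log this generated config is handed to the helper app through process substitution (`--json /dev/fd/63`), so no temporary file ever lands on disk.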
00:22:02.932 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:02.932 12:30:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:02.932 "params": { 00:22:02.932 "name": "Nvme1", 00:22:02.932 "trtype": "tcp", 00:22:02.932 "traddr": "10.0.0.2", 00:22:02.932 "adrfam": "ipv4", 00:22:02.932 "trsvcid": "4420", 00:22:02.932 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.932 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:02.932 "hdgst": false, 00:22:02.932 "ddgst": false 00:22:02.932 }, 00:22:02.932 "method": "bdev_nvme_attach_controller" 00:22:02.932 },{ 00:22:02.932 "params": { 00:22:02.932 "name": "Nvme2", 00:22:02.932 "trtype": "tcp", 00:22:02.932 "traddr": "10.0.0.2", 00:22:02.932 "adrfam": "ipv4", 00:22:02.932 "trsvcid": "4420", 00:22:02.932 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:02.932 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:02.932 "hdgst": false, 00:22:02.932 "ddgst": false 00:22:02.932 }, 00:22:02.932 "method": "bdev_nvme_attach_controller" 00:22:02.932 },{ 00:22:02.932 "params": { 00:22:02.932 "name": "Nvme3", 00:22:02.932 "trtype": "tcp", 00:22:02.932 "traddr": "10.0.0.2", 00:22:02.932 "adrfam": "ipv4", 00:22:02.932 "trsvcid": "4420", 00:22:02.932 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:02.932 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:02.932 "hdgst": false, 00:22:02.932 "ddgst": false 00:22:02.932 }, 00:22:02.932 "method": "bdev_nvme_attach_controller" 00:22:02.932 },{ 00:22:02.932 "params": { 00:22:02.932 "name": "Nvme4", 00:22:02.932 "trtype": "tcp", 00:22:02.933 "traddr": "10.0.0.2", 00:22:02.933 "adrfam": "ipv4", 00:22:02.933 "trsvcid": "4420", 00:22:02.933 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:02.933 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:02.933 "hdgst": false, 00:22:02.933 "ddgst": false 00:22:02.933 }, 00:22:02.933 "method": "bdev_nvme_attach_controller" 00:22:02.933 },{ 00:22:02.933 "params": { 
00:22:02.933 "name": "Nvme5", 00:22:02.933 "trtype": "tcp", 00:22:02.933 "traddr": "10.0.0.2", 00:22:02.933 "adrfam": "ipv4", 00:22:02.933 "trsvcid": "4420", 00:22:02.933 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:02.933 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:02.933 "hdgst": false, 00:22:02.933 "ddgst": false 00:22:02.933 }, 00:22:02.933 "method": "bdev_nvme_attach_controller" 00:22:02.933 },{ 00:22:02.933 "params": { 00:22:02.933 "name": "Nvme6", 00:22:02.933 "trtype": "tcp", 00:22:02.933 "traddr": "10.0.0.2", 00:22:02.933 "adrfam": "ipv4", 00:22:02.933 "trsvcid": "4420", 00:22:02.933 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:02.933 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:02.933 "hdgst": false, 00:22:02.933 "ddgst": false 00:22:02.933 }, 00:22:02.933 "method": "bdev_nvme_attach_controller" 00:22:02.933 },{ 00:22:02.933 "params": { 00:22:02.933 "name": "Nvme7", 00:22:02.933 "trtype": "tcp", 00:22:02.933 "traddr": "10.0.0.2", 00:22:02.933 "adrfam": "ipv4", 00:22:02.933 "trsvcid": "4420", 00:22:02.933 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:02.933 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:02.933 "hdgst": false, 00:22:02.933 "ddgst": false 00:22:02.933 }, 00:22:02.933 "method": "bdev_nvme_attach_controller" 00:22:02.933 },{ 00:22:02.933 "params": { 00:22:02.933 "name": "Nvme8", 00:22:02.933 "trtype": "tcp", 00:22:02.933 "traddr": "10.0.0.2", 00:22:02.933 "adrfam": "ipv4", 00:22:02.933 "trsvcid": "4420", 00:22:02.933 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:02.933 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:02.933 "hdgst": false, 00:22:02.933 "ddgst": false 00:22:02.933 }, 00:22:02.933 "method": "bdev_nvme_attach_controller" 00:22:02.933 },{ 00:22:02.933 "params": { 00:22:02.933 "name": "Nvme9", 00:22:02.933 "trtype": "tcp", 00:22:02.933 "traddr": "10.0.0.2", 00:22:02.933 "adrfam": "ipv4", 00:22:02.933 "trsvcid": "4420", 00:22:02.933 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:02.933 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:02.933 "hdgst": false, 00:22:02.933 "ddgst": false 00:22:02.933 }, 00:22:02.933 "method": "bdev_nvme_attach_controller" 00:22:02.933 },{ 00:22:02.933 "params": { 00:22:02.933 "name": "Nvme10", 00:22:02.933 "trtype": "tcp", 00:22:02.933 "traddr": "10.0.0.2", 00:22:02.933 "adrfam": "ipv4", 00:22:02.933 "trsvcid": "4420", 00:22:02.933 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:02.933 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:02.933 "hdgst": false, 00:22:02.933 "ddgst": false 00:22:02.933 }, 00:22:02.933 "method": "bdev_nvme_attach_controller" 00:22:02.933 }' 00:22:02.933 [2024-12-10 12:30:24.889279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.933 [2024-12-10 12:30:24.929861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.830 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.830 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:22:04.830 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:04.830 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.830 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:04.830 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.830 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1686832 00:22:04.830 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:04.830 12:30:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:05.766 
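The shutdown test's core check now follows in the trace: `shutdown.sh@84` hard-kills the helper process with `kill -9 $perfpid` (pid 1686832), then `shutdown.sh@89` uses `kill -0` on the target pid (1686754) to assert the nvmf target survived its client dying mid-connection. The mechanism can be sketched with two throwaway `sleep` processes standing in for the target and the helper:

```shell
# Sketch of the shutdown_tc1 kill/survival check traced above, using
# throwaway `sleep` processes in place of nvmf_tgt and bdevperf.
sleep 30 & tgt_pid=$!        # stands in for the nvmf target (1686754)
sleep 30 & perf_pid=$!       # stands in for the perf helper (1686832)

kill -9 "$perf_pid"          # hard-kill the helper, as shutdown.sh@84 does
wait "$perf_pid" 2>/dev/null || true

alive=no
if kill -0 "$tgt_pid" 2>/dev/null; then   # kill -0 only probes, sends no signal
  alive=yes
  echo "target still alive after helper SIGKILL"
fi
kill "$tgt_pid" 2>/dev/null  # clean up the stand-in target
```

The `Killed` shell job-control message that appears next in the log is exactly what this `kill -9` produces against the real `bdev_svc` helper.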
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/shutdown.sh: line 74: 1686832 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1686754 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:05.766 { 00:22:05.766 "params": { 00:22:05.766 "name": "Nvme$subsystem", 00:22:05.766 "trtype": "$TEST_TRANSPORT", 00:22:05.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.766 "adrfam": "ipv4", 00:22:05.766 "trsvcid": "$NVMF_PORT", 00:22:05.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.766 "hdgst": ${hdgst:-false}, 00:22:05.766 "ddgst": ${ddgst:-false} 00:22:05.766 }, 00:22:05.766 "method": "bdev_nvme_attach_controller" 00:22:05.766 } 00:22:05.766 EOF 00:22:05.766 )") 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:05.766 12:30:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:05.766 { 00:22:05.766 "params": { 00:22:05.766 "name": "Nvme$subsystem", 00:22:05.766 "trtype": "$TEST_TRANSPORT", 00:22:05.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.766 "adrfam": "ipv4", 00:22:05.766 "trsvcid": "$NVMF_PORT", 00:22:05.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.766 "hdgst": ${hdgst:-false}, 00:22:05.766 "ddgst": ${ddgst:-false} 00:22:05.766 }, 00:22:05.766 "method": "bdev_nvme_attach_controller" 00:22:05.766 } 00:22:05.766 EOF 00:22:05.766 )") 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:05.766 { 00:22:05.766 "params": { 00:22:05.766 "name": "Nvme$subsystem", 00:22:05.766 "trtype": "$TEST_TRANSPORT", 00:22:05.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.766 "adrfam": "ipv4", 00:22:05.766 "trsvcid": "$NVMF_PORT", 00:22:05.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.766 "hdgst": ${hdgst:-false}, 00:22:05.766 "ddgst": ${ddgst:-false} 00:22:05.766 }, 00:22:05.766 "method": "bdev_nvme_attach_controller" 00:22:05.766 } 00:22:05.766 EOF 00:22:05.766 )") 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:05.766 
12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:05.766 { 00:22:05.766 "params": { 00:22:05.766 "name": "Nvme$subsystem", 00:22:05.766 "trtype": "$TEST_TRANSPORT", 00:22:05.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.766 "adrfam": "ipv4", 00:22:05.766 "trsvcid": "$NVMF_PORT", 00:22:05.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.766 "hdgst": ${hdgst:-false}, 00:22:05.766 "ddgst": ${ddgst:-false} 00:22:05.766 }, 00:22:05.766 "method": "bdev_nvme_attach_controller" 00:22:05.766 } 00:22:05.766 EOF 00:22:05.766 )") 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:05.766 { 00:22:05.766 "params": { 00:22:05.766 "name": "Nvme$subsystem", 00:22:05.766 "trtype": "$TEST_TRANSPORT", 00:22:05.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.766 "adrfam": "ipv4", 00:22:05.766 "trsvcid": "$NVMF_PORT", 00:22:05.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.766 "hdgst": ${hdgst:-false}, 00:22:05.766 "ddgst": ${ddgst:-false} 00:22:05.766 }, 00:22:05.766 "method": "bdev_nvme_attach_controller" 00:22:05.766 } 00:22:05.766 EOF 00:22:05.766 )") 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:22:05.766 { 00:22:05.766 "params": { 00:22:05.766 "name": "Nvme$subsystem", 00:22:05.766 "trtype": "$TEST_TRANSPORT", 00:22:05.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.766 "adrfam": "ipv4", 00:22:05.766 "trsvcid": "$NVMF_PORT", 00:22:05.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.766 "hdgst": ${hdgst:-false}, 00:22:05.766 "ddgst": ${ddgst:-false} 00:22:05.766 }, 00:22:05.766 "method": "bdev_nvme_attach_controller" 00:22:05.766 } 00:22:05.766 EOF 00:22:05.766 )") 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:05.766 { 00:22:05.766 "params": { 00:22:05.766 "name": "Nvme$subsystem", 00:22:05.766 "trtype": "$TEST_TRANSPORT", 00:22:05.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.766 "adrfam": "ipv4", 00:22:05.766 "trsvcid": "$NVMF_PORT", 00:22:05.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.766 "hdgst": ${hdgst:-false}, 00:22:05.766 "ddgst": ${ddgst:-false} 00:22:05.766 }, 00:22:05.766 "method": "bdev_nvme_attach_controller" 00:22:05.766 } 00:22:05.766 EOF 00:22:05.766 )") 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:05.766 { 00:22:05.766 "params": { 00:22:05.766 "name": "Nvme$subsystem", 00:22:05.766 "trtype": "$TEST_TRANSPORT", 
00:22:05.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.766 "adrfam": "ipv4", 00:22:05.766 "trsvcid": "$NVMF_PORT", 00:22:05.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.766 "hdgst": ${hdgst:-false}, 00:22:05.766 "ddgst": ${ddgst:-false} 00:22:05.766 }, 00:22:05.766 "method": "bdev_nvme_attach_controller" 00:22:05.766 } 00:22:05.766 EOF 00:22:05.766 )") 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:05.766 [2024-12-10 12:30:27.747867] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:22:05.766 [2024-12-10 12:30:27.747917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1687324 ] 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:05.766 { 00:22:05.766 "params": { 00:22:05.766 "name": "Nvme$subsystem", 00:22:05.766 "trtype": "$TEST_TRANSPORT", 00:22:05.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.766 "adrfam": "ipv4", 00:22:05.766 "trsvcid": "$NVMF_PORT", 00:22:05.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.766 "hdgst": ${hdgst:-false}, 00:22:05.766 "ddgst": ${ddgst:-false} 00:22:05.766 }, 00:22:05.766 "method": "bdev_nvme_attach_controller" 00:22:05.766 } 00:22:05.766 EOF 00:22:05.766 )") 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:05.766 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:05.766 { 00:22:05.766 "params": { 00:22:05.766 "name": "Nvme$subsystem", 00:22:05.766 "trtype": "$TEST_TRANSPORT", 00:22:05.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:05.766 "adrfam": "ipv4", 00:22:05.766 "trsvcid": "$NVMF_PORT", 00:22:05.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:05.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:05.767 "hdgst": ${hdgst:-false}, 00:22:05.767 "ddgst": ${ddgst:-false} 00:22:05.767 }, 00:22:05.767 "method": "bdev_nvme_attach_controller" 00:22:05.767 } 00:22:05.767 EOF 00:22:05.767 )") 00:22:05.767 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:05.767 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:05.767 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:05.767 12:30:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:05.767 "params": { 00:22:05.767 "name": "Nvme1", 00:22:05.767 "trtype": "tcp", 00:22:05.767 "traddr": "10.0.0.2", 00:22:05.767 "adrfam": "ipv4", 00:22:05.767 "trsvcid": "4420", 00:22:05.767 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.767 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:05.767 "hdgst": false, 00:22:05.767 "ddgst": false 00:22:05.767 }, 00:22:05.767 "method": "bdev_nvme_attach_controller" 00:22:05.767 },{ 00:22:05.767 "params": { 00:22:05.767 "name": "Nvme2", 00:22:05.767 "trtype": "tcp", 00:22:05.767 "traddr": "10.0.0.2", 00:22:05.767 "adrfam": "ipv4", 00:22:05.767 "trsvcid": "4420", 00:22:05.767 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:05.767 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:05.767 "hdgst": false, 00:22:05.767 "ddgst": false 00:22:05.767 }, 
00:22:05.767 "method": "bdev_nvme_attach_controller" 00:22:05.767 },{ 00:22:05.767 "params": { 00:22:05.767 "name": "Nvme3", 00:22:05.767 "trtype": "tcp", 00:22:05.767 "traddr": "10.0.0.2", 00:22:05.767 "adrfam": "ipv4", 00:22:05.767 "trsvcid": "4420", 00:22:05.767 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:05.767 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:05.767 "hdgst": false, 00:22:05.767 "ddgst": false 00:22:05.767 }, 00:22:05.767 "method": "bdev_nvme_attach_controller" 00:22:05.767 },{ 00:22:05.767 "params": { 00:22:05.767 "name": "Nvme4", 00:22:05.767 "trtype": "tcp", 00:22:05.767 "traddr": "10.0.0.2", 00:22:05.767 "adrfam": "ipv4", 00:22:05.767 "trsvcid": "4420", 00:22:05.767 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:05.767 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:05.767 "hdgst": false, 00:22:05.767 "ddgst": false 00:22:05.767 }, 00:22:05.767 "method": "bdev_nvme_attach_controller" 00:22:05.767 },{ 00:22:05.767 "params": { 00:22:05.767 "name": "Nvme5", 00:22:05.767 "trtype": "tcp", 00:22:05.767 "traddr": "10.0.0.2", 00:22:05.767 "adrfam": "ipv4", 00:22:05.767 "trsvcid": "4420", 00:22:05.767 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:05.767 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:05.767 "hdgst": false, 00:22:05.767 "ddgst": false 00:22:05.767 }, 00:22:05.767 "method": "bdev_nvme_attach_controller" 00:22:05.767 },{ 00:22:05.767 "params": { 00:22:05.767 "name": "Nvme6", 00:22:05.767 "trtype": "tcp", 00:22:05.767 "traddr": "10.0.0.2", 00:22:05.767 "adrfam": "ipv4", 00:22:05.767 "trsvcid": "4420", 00:22:05.767 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:05.767 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:05.767 "hdgst": false, 00:22:05.767 "ddgst": false 00:22:05.767 }, 00:22:05.767 "method": "bdev_nvme_attach_controller" 00:22:05.767 },{ 00:22:05.767 "params": { 00:22:05.767 "name": "Nvme7", 00:22:05.767 "trtype": "tcp", 00:22:05.767 "traddr": "10.0.0.2", 00:22:05.767 "adrfam": "ipv4", 00:22:05.767 "trsvcid": "4420", 00:22:05.767 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:05.767 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:05.767 "hdgst": false, 00:22:05.767 "ddgst": false 00:22:05.767 }, 00:22:05.767 "method": "bdev_nvme_attach_controller" 00:22:05.767 },{ 00:22:05.767 "params": { 00:22:05.767 "name": "Nvme8", 00:22:05.767 "trtype": "tcp", 00:22:05.767 "traddr": "10.0.0.2", 00:22:05.767 "adrfam": "ipv4", 00:22:05.767 "trsvcid": "4420", 00:22:05.767 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:05.767 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:05.767 "hdgst": false, 00:22:05.767 "ddgst": false 00:22:05.767 }, 00:22:05.767 "method": "bdev_nvme_attach_controller" 00:22:05.767 },{ 00:22:05.767 "params": { 00:22:05.767 "name": "Nvme9", 00:22:05.767 "trtype": "tcp", 00:22:05.767 "traddr": "10.0.0.2", 00:22:05.767 "adrfam": "ipv4", 00:22:05.767 "trsvcid": "4420", 00:22:05.767 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:05.767 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:05.767 "hdgst": false, 00:22:05.767 "ddgst": false 00:22:05.767 }, 00:22:05.767 "method": "bdev_nvme_attach_controller" 00:22:05.767 },{ 00:22:05.767 "params": { 00:22:05.767 "name": "Nvme10", 00:22:05.767 "trtype": "tcp", 00:22:05.767 "traddr": "10.0.0.2", 00:22:05.767 "adrfam": "ipv4", 00:22:05.767 "trsvcid": "4420", 00:22:05.767 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:05.767 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:05.767 "hdgst": false, 00:22:05.767 "ddgst": false 00:22:05.767 }, 00:22:05.767 "method": "bdev_nvme_attach_controller" 00:22:05.767 }' 00:22:05.767 [2024-12-10 12:30:27.827517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.767 [2024-12-10 12:30:27.867976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.139 Running I/O for 1 seconds... 
00:22:08.331 2190.00 IOPS, 136.88 MiB/s 00:22:08.331 Latency(us) 00:22:08.331 [2024-12-10T11:30:30.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.331 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:08.331 Verification LBA range: start 0x0 length 0x400 00:22:08.331 Nvme1n1 : 1.15 282.67 17.67 0.00 0.00 222444.61 12195.39 220656.86 00:22:08.331 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:08.331 Verification LBA range: start 0x0 length 0x400 00:22:08.331 Nvme2n1 : 1.06 242.01 15.13 0.00 0.00 258071.15 15158.76 237069.36 00:22:08.331 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:08.331 Verification LBA range: start 0x0 length 0x400 00:22:08.331 Nvme3n1 : 1.13 282.92 17.68 0.00 0.00 217282.29 22225.25 212450.62 00:22:08.331 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:08.331 Verification LBA range: start 0x0 length 0x400 00:22:08.331 Nvme4n1 : 1.14 284.08 17.76 0.00 0.00 213037.29 4074.63 223392.28 00:22:08.331 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:08.331 Verification LBA range: start 0x0 length 0x400 00:22:08.331 Nvme5n1 : 1.16 276.55 17.28 0.00 0.00 215524.93 15158.76 206979.78 00:22:08.331 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:08.331 Verification LBA range: start 0x0 length 0x400 00:22:08.331 Nvme6n1 : 1.09 235.77 14.74 0.00 0.00 249080.43 17438.27 242540.19 00:22:08.331 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:08.331 Verification LBA range: start 0x0 length 0x400 00:22:08.331 Nvme7n1 : 1.15 286.88 17.93 0.00 0.00 202242.88 1631.28 217009.64 00:22:08.331 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:08.331 Verification LBA range: start 0x0 length 0x400 00:22:08.331 Nvme8n1 : 1.15 278.55 17.41 0.00 0.00 205505.40 15500.69 251658.24 
00:22:08.331 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:08.331 Verification LBA range: start 0x0 length 0x400 00:22:08.331 Nvme9n1 : 1.16 275.20 17.20 0.00 0.00 205065.35 17780.20 227951.30 00:22:08.331 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:08.331 Verification LBA range: start 0x0 length 0x400 00:22:08.331 Nvme10n1 : 1.16 275.79 17.24 0.00 0.00 201356.78 14360.93 249834.63 00:22:08.332 [2024-12-10T11:30:30.500Z] =================================================================================================================== 00:22:08.332 [2024-12-10T11:30:30.500Z] Total : 2720.41 170.03 0.00 0.00 217476.51 1631.28 251658.24 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:08.590 12:30:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:08.590 rmmod nvme_tcp 00:22:08.590 rmmod nvme_fabrics 00:22:08.590 rmmod nvme_keyring 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1686754 ']' 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1686754 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1686754 ']' 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1686754 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1686754 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:08.590 12:30:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1686754' 00:22:08.590 killing process with pid 1686754 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1686754 00:22:08.590 12:30:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1686754 00:22:08.850 12:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:08.850 12:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:08.850 12:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:08.850 12:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:08.850 12:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:08.850 12:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:08.850 12:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:09.109 12:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:09.109 12:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:09.109 12:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.109 12:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.109 12:30:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.014 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:11.014 00:22:11.014 real 0m15.186s 00:22:11.014 user 0m33.551s 00:22:11.014 sys 0m5.751s 00:22:11.014 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:11.014 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:11.014 ************************************ 00:22:11.014 END TEST nvmf_shutdown_tc1 00:22:11.014 ************************************ 00:22:11.014 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:11.014 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:11.014 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:11.014 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:11.014 ************************************ 00:22:11.014 START TEST nvmf_shutdown_tc2 00:22:11.014 ************************************ 00:22:11.014 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:22:11.014 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:11.014 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.015 12:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:11.015 12:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:11.015 12:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:11.015 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:11.275 12:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:11.275 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:11.275 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:11.275 12:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:11.275 Found net devices under 0000:86:00.0: cvl_0_0 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:11.275 12:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:11.275 Found net devices under 0000:86:00.1: cvl_0_1 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:11.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:22:11.275 00:22:11.275 --- 10.0.0.2 ping statistics --- 00:22:11.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.275 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:11.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:11.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:22:11.275 00:22:11.275 --- 10.0.0.1 ping statistics --- 00:22:11.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.275 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:11.275 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:11.534 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:11.534 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:11.534 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:11.534 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:11.534 
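The trace above (`nvmf_tcp_init` in nvmf/common.sh) moves one port of the e810 NIC pair into a private network namespace, so the SPDK target and the initiator exercise real driver/wire paths over 10.0.0.2/10.0.0.1. A dry-run sketch of that sequence, using the interface and namespace names from this log (`run` only echoes; replace it with `"$@"` and run as root to actually apply):

```shell
# Dry-run sketch of the netns topology set up in the log above.
run() { echo "+ $*"; }          # swap for: "$@"  (requires root)
NS=cvl_0_0_ns_spdk

run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"          # target side lives in the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator IP, default namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port on the initiator-facing interface
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# connectivity check in both directions, as the log does
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```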
12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1688354 00:22:11.534 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1688354 00:22:11.534 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:11.534 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1688354 ']' 00:22:11.534 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.534 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:11.534 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.534 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:11.534 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:11.534 [2024-12-10 12:30:33.506750] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:22:11.534 [2024-12-10 12:30:33.506795] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.534 [2024-12-10 12:30:33.585149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:11.534 [2024-12-10 12:30:33.626796] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.534 [2024-12-10 12:30:33.626832] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.534 [2024-12-10 12:30:33.626839] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.534 [2024-12-10 12:30:33.626845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.534 [2024-12-10 12:30:33.626850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:11.534 [2024-12-10 12:30:33.628338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.534 [2024-12-10 12:30:33.628446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:11.534 [2024-12-10 12:30:33.628573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.534 [2024-12-10 12:30:33.628574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:11.794 [2024-12-10 12:30:33.770862] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.794 12:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # 
cat 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.794 12:30:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:11.794 Malloc1 00:22:11.794 [2024-12-10 12:30:33.885282] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.794 Malloc2 00:22:11.794 Malloc3 00:22:12.052 Malloc4 00:22:12.052 Malloc5 00:22:12.052 Malloc6 00:22:12.052 Malloc7 00:22:12.052 Malloc8 00:22:12.052 Malloc9 
00:22:12.312 Malloc10 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1688618 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1688618 /var/tmp/bdevperf.sock 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1688618 ']' 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:12.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.312 { 00:22:12.312 "params": { 00:22:12.312 "name": "Nvme$subsystem", 00:22:12.312 "trtype": "$TEST_TRANSPORT", 00:22:12.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.312 "adrfam": "ipv4", 00:22:12.312 "trsvcid": "$NVMF_PORT", 00:22:12.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.312 "hdgst": ${hdgst:-false}, 00:22:12.312 "ddgst": ${ddgst:-false} 00:22:12.312 }, 00:22:12.312 "method": "bdev_nvme_attach_controller" 00:22:12.312 } 00:22:12.312 EOF 00:22:12.312 )") 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.312 { 00:22:12.312 "params": { 00:22:12.312 "name": "Nvme$subsystem", 00:22:12.312 "trtype": "$TEST_TRANSPORT", 00:22:12.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.312 
"adrfam": "ipv4", 00:22:12.312 "trsvcid": "$NVMF_PORT", 00:22:12.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.312 "hdgst": ${hdgst:-false}, 00:22:12.312 "ddgst": ${ddgst:-false} 00:22:12.312 }, 00:22:12.312 "method": "bdev_nvme_attach_controller" 00:22:12.312 } 00:22:12.312 EOF 00:22:12.312 )") 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.312 { 00:22:12.312 "params": { 00:22:12.312 "name": "Nvme$subsystem", 00:22:12.312 "trtype": "$TEST_TRANSPORT", 00:22:12.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.312 "adrfam": "ipv4", 00:22:12.312 "trsvcid": "$NVMF_PORT", 00:22:12.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.312 "hdgst": ${hdgst:-false}, 00:22:12.312 "ddgst": ${ddgst:-false} 00:22:12.312 }, 00:22:12.312 "method": "bdev_nvme_attach_controller" 00:22:12.312 } 00:22:12.312 EOF 00:22:12.312 )") 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.312 { 00:22:12.312 "params": { 00:22:12.312 "name": "Nvme$subsystem", 00:22:12.312 "trtype": "$TEST_TRANSPORT", 00:22:12.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.312 "adrfam": "ipv4", 00:22:12.312 "trsvcid": "$NVMF_PORT", 00:22:12.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:22:12.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.312 "hdgst": ${hdgst:-false}, 00:22:12.312 "ddgst": ${ddgst:-false} 00:22:12.312 }, 00:22:12.312 "method": "bdev_nvme_attach_controller" 00:22:12.312 } 00:22:12.312 EOF 00:22:12.312 )") 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.312 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.312 { 00:22:12.312 "params": { 00:22:12.313 "name": "Nvme$subsystem", 00:22:12.313 "trtype": "$TEST_TRANSPORT", 00:22:12.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.313 "adrfam": "ipv4", 00:22:12.313 "trsvcid": "$NVMF_PORT", 00:22:12.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.313 "hdgst": ${hdgst:-false}, 00:22:12.313 "ddgst": ${ddgst:-false} 00:22:12.313 }, 00:22:12.313 "method": "bdev_nvme_attach_controller" 00:22:12.313 } 00:22:12.313 EOF 00:22:12.313 )") 00:22:12.313 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:12.313 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.313 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.313 { 00:22:12.313 "params": { 00:22:12.313 "name": "Nvme$subsystem", 00:22:12.313 "trtype": "$TEST_TRANSPORT", 00:22:12.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.313 "adrfam": "ipv4", 00:22:12.313 "trsvcid": "$NVMF_PORT", 00:22:12.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.313 "hdgst": ${hdgst:-false}, 00:22:12.313 "ddgst": 
${ddgst:-false} 00:22:12.313 }, 00:22:12.313 "method": "bdev_nvme_attach_controller" 00:22:12.313 } 00:22:12.313 EOF 00:22:12.313 )") 00:22:12.313 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:12.313 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.313 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.313 { 00:22:12.313 "params": { 00:22:12.313 "name": "Nvme$subsystem", 00:22:12.313 "trtype": "$TEST_TRANSPORT", 00:22:12.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.313 "adrfam": "ipv4", 00:22:12.313 "trsvcid": "$NVMF_PORT", 00:22:12.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.313 "hdgst": ${hdgst:-false}, 00:22:12.313 "ddgst": ${ddgst:-false} 00:22:12.313 }, 00:22:12.313 "method": "bdev_nvme_attach_controller" 00:22:12.313 } 00:22:12.313 EOF 00:22:12.313 )") 00:22:12.313 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:12.313 [2024-12-10 12:30:34.358748] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:22:12.313 [2024-12-10 12:30:34.358797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1688618 ] 00:22:12.313 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.313 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.313 { 00:22:12.313 "params": { 00:22:12.313 "name": "Nvme$subsystem", 00:22:12.313 "trtype": "$TEST_TRANSPORT", 00:22:12.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.313 "adrfam": "ipv4", 00:22:12.313 "trsvcid": "$NVMF_PORT", 00:22:12.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.313 "hdgst": ${hdgst:-false}, 00:22:12.313 "ddgst": ${ddgst:-false} 00:22:12.313 }, 00:22:12.313 "method": "bdev_nvme_attach_controller" 00:22:12.313 } 00:22:12.313 EOF 00:22:12.313 )") 00:22:12.313 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:12.313 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.313 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.313 { 00:22:12.313 "params": { 00:22:12.313 "name": "Nvme$subsystem", 00:22:12.313 "trtype": "$TEST_TRANSPORT", 00:22:12.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.313 "adrfam": "ipv4", 00:22:12.313 "trsvcid": "$NVMF_PORT", 00:22:12.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.313 "hdgst": ${hdgst:-false}, 00:22:12.313 "ddgst": ${ddgst:-false} 00:22:12.313 }, 00:22:12.313 "method": 
"bdev_nvme_attach_controller" 00:22:12.313 } 00:22:12.313 EOF 00:22:12.313 )") 00:22:12.313 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:12.313 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:12.313 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:12.313 { 00:22:12.313 "params": { 00:22:12.313 "name": "Nvme$subsystem", 00:22:12.313 "trtype": "$TEST_TRANSPORT", 00:22:12.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.313 "adrfam": "ipv4", 00:22:12.313 "trsvcid": "$NVMF_PORT", 00:22:12.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.313 "hdgst": ${hdgst:-false}, 00:22:12.313 "ddgst": ${ddgst:-false} 00:22:12.313 }, 00:22:12.313 "method": "bdev_nvme_attach_controller" 00:22:12.313 } 00:22:12.313 EOF 00:22:12.313 )") 00:22:12.313 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:12.313 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:22:12.313 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:12.313 12:30:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:12.313 "params": { 00:22:12.313 "name": "Nvme1", 00:22:12.313 "trtype": "tcp", 00:22:12.313 "traddr": "10.0.0.2", 00:22:12.313 "adrfam": "ipv4", 00:22:12.313 "trsvcid": "4420", 00:22:12.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.313 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:12.313 "hdgst": false, 00:22:12.313 "ddgst": false 00:22:12.313 }, 00:22:12.313 "method": "bdev_nvme_attach_controller" 00:22:12.313 },{ 00:22:12.313 "params": { 00:22:12.313 "name": "Nvme2", 00:22:12.313 "trtype": "tcp", 00:22:12.313 "traddr": "10.0.0.2", 00:22:12.313 "adrfam": "ipv4", 00:22:12.313 "trsvcid": "4420", 00:22:12.313 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:12.313 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:12.313 "hdgst": false, 00:22:12.313 "ddgst": false 00:22:12.313 }, 00:22:12.313 "method": "bdev_nvme_attach_controller" 00:22:12.313 },{ 00:22:12.313 "params": { 00:22:12.313 "name": "Nvme3", 00:22:12.313 "trtype": "tcp", 00:22:12.313 "traddr": "10.0.0.2", 00:22:12.313 "adrfam": "ipv4", 00:22:12.313 "trsvcid": "4420", 00:22:12.313 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:12.313 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:12.313 "hdgst": false, 00:22:12.313 "ddgst": false 00:22:12.313 }, 00:22:12.313 "method": "bdev_nvme_attach_controller" 00:22:12.313 },{ 00:22:12.313 "params": { 00:22:12.313 "name": "Nvme4", 00:22:12.313 "trtype": "tcp", 00:22:12.313 "traddr": "10.0.0.2", 00:22:12.313 "adrfam": "ipv4", 00:22:12.313 "trsvcid": "4420", 00:22:12.313 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:12.313 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:12.313 "hdgst": false, 00:22:12.313 "ddgst": false 00:22:12.313 }, 00:22:12.313 "method": "bdev_nvme_attach_controller" 00:22:12.313 },{ 00:22:12.313 "params": { 
00:22:12.313 "name": "Nvme5", 00:22:12.313 "trtype": "tcp", 00:22:12.313 "traddr": "10.0.0.2", 00:22:12.313 "adrfam": "ipv4", 00:22:12.313 "trsvcid": "4420", 00:22:12.313 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:12.313 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:12.313 "hdgst": false, 00:22:12.313 "ddgst": false 00:22:12.313 }, 00:22:12.313 "method": "bdev_nvme_attach_controller" 00:22:12.313 },{ 00:22:12.313 "params": { 00:22:12.313 "name": "Nvme6", 00:22:12.313 "trtype": "tcp", 00:22:12.313 "traddr": "10.0.0.2", 00:22:12.313 "adrfam": "ipv4", 00:22:12.313 "trsvcid": "4420", 00:22:12.313 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:12.313 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:12.313 "hdgst": false, 00:22:12.313 "ddgst": false 00:22:12.313 }, 00:22:12.313 "method": "bdev_nvme_attach_controller" 00:22:12.313 },{ 00:22:12.313 "params": { 00:22:12.313 "name": "Nvme7", 00:22:12.313 "trtype": "tcp", 00:22:12.313 "traddr": "10.0.0.2", 00:22:12.313 "adrfam": "ipv4", 00:22:12.313 "trsvcid": "4420", 00:22:12.313 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:12.313 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:12.313 "hdgst": false, 00:22:12.313 "ddgst": false 00:22:12.313 }, 00:22:12.313 "method": "bdev_nvme_attach_controller" 00:22:12.313 },{ 00:22:12.313 "params": { 00:22:12.313 "name": "Nvme8", 00:22:12.313 "trtype": "tcp", 00:22:12.313 "traddr": "10.0.0.2", 00:22:12.313 "adrfam": "ipv4", 00:22:12.313 "trsvcid": "4420", 00:22:12.313 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:12.313 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:12.313 "hdgst": false, 00:22:12.313 "ddgst": false 00:22:12.313 }, 00:22:12.313 "method": "bdev_nvme_attach_controller" 00:22:12.313 },{ 00:22:12.313 "params": { 00:22:12.313 "name": "Nvme9", 00:22:12.313 "trtype": "tcp", 00:22:12.313 "traddr": "10.0.0.2", 00:22:12.313 "adrfam": "ipv4", 00:22:12.313 "trsvcid": "4420", 00:22:12.313 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:12.313 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:12.313 "hdgst": false, 00:22:12.313 "ddgst": false 00:22:12.313 }, 00:22:12.313 "method": "bdev_nvme_attach_controller" 00:22:12.313 },{ 00:22:12.313 "params": { 00:22:12.314 "name": "Nvme10", 00:22:12.314 "trtype": "tcp", 00:22:12.314 "traddr": "10.0.0.2", 00:22:12.314 "adrfam": "ipv4", 00:22:12.314 "trsvcid": "4420", 00:22:12.314 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:12.314 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:12.314 "hdgst": false, 00:22:12.314 "ddgst": false 00:22:12.314 }, 00:22:12.314 "method": "bdev_nvme_attach_controller" 00:22:12.314 }' 00:22:12.314 [2024-12-10 12:30:34.434198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.314 [2024-12-10 12:30:34.475000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.315 Running I/O for 10 seconds... 00:22:14.315 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.315 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:22:14.315 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:14.315 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.315 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.315 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.315 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:14.315 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:14.315 12:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:14.315 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:14.315 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:14.315 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:14.315 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:14.315 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:14.315 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:14.315 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.315 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.315 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.315 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:14.315 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:14.315 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:14.600 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:14.600 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:14.600 12:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:14.600 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:14.600 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.600 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.600 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.600 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:14.600 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:14.600 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:14.879 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:14.879 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:14.879 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:14.879 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:14.879 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.879 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:14.879 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:14.879 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:14.879 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:14.879 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:14.879 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:14.879 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:14.879 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1688618 00:22:14.879 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1688618 ']' 00:22:14.879 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1688618 00:22:14.879 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:14.879 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.879 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1688618 00:22:14.879 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:14.879 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:14.879 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1688618' 00:22:14.879 killing process with pid 1688618 00:22:14.879 12:30:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1688618 00:22:14.879 12:30:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1688618 00:22:14.879 Received shutdown signal, test time was about 0.986238 seconds 00:22:14.879 00:22:14.879 Latency(us) 00:22:14.879 [2024-12-10T11:30:37.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.879 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:14.879 Verification LBA range: start 0x0 length 0x400 00:22:14.879 Nvme1n1 : 0.95 269.93 16.87 0.00 0.00 234229.98 15386.71 266247.12 00:22:14.879 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:14.879 Verification LBA range: start 0x0 length 0x400 00:22:14.879 Nvme2n1 : 0.92 212.14 13.26 0.00 0.00 291017.64 5014.93 258952.68 00:22:14.879 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:14.879 Verification LBA range: start 0x0 length 0x400 00:22:14.879 Nvme3n1 : 0.93 282.73 17.67 0.00 0.00 215676.55 3846.68 257129.07 00:22:14.879 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:14.879 Verification LBA range: start 0x0 length 0x400 00:22:14.879 Nvme4n1 : 0.94 272.22 17.01 0.00 0.00 220669.55 17324.30 255305.46 00:22:14.879 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:14.879 Verification LBA range: start 0x0 length 0x400 00:22:14.879 Nvme5n1 : 0.92 209.23 13.08 0.00 0.00 281395.79 40119.43 242540.19 00:22:14.879 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:14.879 Verification LBA range: start 0x0 length 0x400 00:22:14.879 Nvme6n1 : 0.99 264.82 16.55 0.00 0.00 210084.09 12480.33 253481.85 00:22:14.879 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:14.879 Verification LBA range: start 0x0 length 0x400 00:22:14.879 Nvme7n1 : 
0.94 275.22 17.20 0.00 0.00 206265.03 2863.64 257129.07 00:22:14.879 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:14.879 Verification LBA range: start 0x0 length 0x400 00:22:14.879 Nvme8n1 : 0.92 208.22 13.01 0.00 0.00 267189.95 14019.01 268070.73 00:22:14.879 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:14.879 Verification LBA range: start 0x0 length 0x400 00:22:14.879 Nvme9n1 : 0.93 205.68 12.86 0.00 0.00 265707.52 19831.76 280836.01 00:22:14.879 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:14.879 Verification LBA range: start 0x0 length 0x400 00:22:14.879 Nvme10n1 : 0.94 205.23 12.83 0.00 0.00 261213.79 37156.06 258952.68 00:22:14.879 [2024-12-10T11:30:37.047Z] =================================================================================================================== 00:22:14.879 [2024-12-10T11:30:37.047Z] Total : 2405.42 150.34 0.00 0.00 241227.82 2863.64 280836.01 00:22:15.137 12:30:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:16.071 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1688354 00:22:16.071 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:16.071 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:16.071 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:22:16.071 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:22:16.071 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@46 -- # nvmftestfini 00:22:16.071 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:16.071 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:16.071 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:16.071 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:16.071 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:16.071 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:16.071 rmmod nvme_tcp 00:22:16.329 rmmod nvme_fabrics 00:22:16.329 rmmod nvme_keyring 00:22:16.329 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:16.329 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:16.329 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:16.329 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1688354 ']' 00:22:16.329 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1688354 00:22:16.329 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1688354 ']' 00:22:16.329 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1688354 00:22:16.329 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:16.329 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:22:16.329 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1688354 00:22:16.329 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:16.329 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:16.329 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1688354' 00:22:16.329 killing process with pid 1688354 00:22:16.329 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1688354 00:22:16.329 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1688354 00:22:16.587 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:16.587 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:16.587 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:16.587 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:16.587 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:16.587 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:16.587 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:16.587 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:16.587 12:30:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:16.588 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.588 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.588 12:30:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:19.119 00:22:19.119 real 0m7.621s 00:22:19.119 user 0m23.071s 00:22:19.119 sys 0m1.341s 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:19.119 ************************************ 00:22:19.119 END TEST nvmf_shutdown_tc2 00:22:19.119 ************************************ 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:19.119 ************************************ 00:22:19.119 START TEST nvmf_shutdown_tc3 00:22:19.119 ************************************ 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:19.119 12:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.119 12:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:19.119 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:19.120 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:19.120 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:19.120 Found net devices under 0000:86:00.0: cvl_0_0 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.120 12:30:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:19.120 Found net devices under 0000:86:00.1: cvl_0_1 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.120 12:30:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:22:19.120 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.120 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.120 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:19.120 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.120 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.120 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.120 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:19.120 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:19.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:22:19.120 00:22:19.120 --- 10.0.0.2 ping statistics --- 00:22:19.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.120 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:22:19.120 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:19.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:22:19.121 00:22:19.121 --- 10.0.0.1 ping statistics --- 00:22:19.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.121 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:22:19.121 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.121 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:19.121 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:19.121 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.121 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:19.121 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:19.121 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.121 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:19.121 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:19.121 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:19.121 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:19.121 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:19.121 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:19.379 
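The setup phase above (nvmf/common.sh@250-291) moves the target interface into a dedicated network namespace, assigns 10.0.0.1/24 to the initiator side and 10.0.0.2/24 to the target side, opens TCP port 4420 via iptables, and verifies reachability with a ping in each direction. A minimal sketch of that pattern, using the interface names and addresses from this run's log (it needs root and the real `cvl_0_0`/`cvl_0_1` devices, so the commands are wrapped in a function and only attempted when both are present):

```shell
#!/bin/sh
# Sketch of the netns-based NVMe/TCP test topology seen in the log above.
# cvl_0_0 / cvl_0_1 are the interface names from this particular run.
setup_nvmf_netns() {
    ip netns add cvl_0_0_ns_spdk                  # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move target iface in
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address
    ip netns exec cvl_0_0_ns_spdk \
        ip addr add 10.0.0.2/24 dev cvl_0_0       # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP listen port on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # verify both directions, as the harness does
    ping -c 1 -W 1 10.0.0.2 && \
        ip netns exec cvl_0_0_ns_spdk ping -c 1 -W 1 10.0.0.1
}

# Only attempt the setup when running as root with the test NICs present.
{ [ "$(id -u)" -eq 0 ] && [ -d /sys/class/net/cvl_0_0 ] \
    && setup_nvmf_netns; } || true
echo "netns setup sketch loaded"
```

Putting the target in its own namespace is what lets a single host act as both initiator and target over a real NIC pair without the kernel short-circuiting the traffic.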
12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1689901 00:22:19.379 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1689901 00:22:19.379 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:19.379 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1689901 ']' 00:22:19.379 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.379 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:19.379 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.379 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:19.379 12:30:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:19.379 [2024-12-10 12:30:41.342223] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:22:19.379 [2024-12-10 12:30:41.342267] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.379 [2024-12-10 12:30:41.421319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:19.379 [2024-12-10 12:30:41.461019] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.379 [2024-12-10 12:30:41.461055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.379 [2024-12-10 12:30:41.461062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.379 [2024-12-10 12:30:41.461068] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.379 [2024-12-10 12:30:41.461073] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:19.379 [2024-12-10 12:30:41.462669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.379 [2024-12-10 12:30:41.462774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:19.379 [2024-12-10 12:30:41.462861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.379 [2024-12-10 12:30:41.462862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:20.313 [2024-12-10 12:30:42.222844] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.313 12:30:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # 
cat 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.313 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:20.313 Malloc1 00:22:20.313 [2024-12-10 12:30:42.341833] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:20.313 Malloc2 00:22:20.313 Malloc3 00:22:20.313 Malloc4 00:22:20.571 Malloc5 00:22:20.571 Malloc6 00:22:20.571 Malloc7 00:22:20.571 Malloc8 00:22:20.571 Malloc9 
00:22:20.571 Malloc10 00:22:20.571 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.571 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:20.571 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:20.571 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:20.829 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1690183 00:22:20.829 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1690183 /var/tmp/bdevperf.sock 00:22:20.829 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1690183 ']' 00:22:20.829 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:20.829 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:20.829 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:20.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:20.830 { 00:22:20.830 "params": { 00:22:20.830 "name": "Nvme$subsystem", 00:22:20.830 "trtype": "$TEST_TRANSPORT", 00:22:20.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.830 "adrfam": "ipv4", 00:22:20.830 "trsvcid": "$NVMF_PORT", 00:22:20.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.830 "hdgst": ${hdgst:-false}, 00:22:20.830 "ddgst": ${ddgst:-false} 00:22:20.830 }, 00:22:20.830 "method": "bdev_nvme_attach_controller" 00:22:20.830 } 00:22:20.830 EOF 00:22:20.830 )") 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:20.830 { 00:22:20.830 "params": { 00:22:20.830 "name": "Nvme$subsystem", 00:22:20.830 "trtype": "$TEST_TRANSPORT", 00:22:20.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.830 
"adrfam": "ipv4", 00:22:20.830 "trsvcid": "$NVMF_PORT", 00:22:20.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.830 "hdgst": ${hdgst:-false}, 00:22:20.830 "ddgst": ${ddgst:-false} 00:22:20.830 }, 00:22:20.830 "method": "bdev_nvme_attach_controller" 00:22:20.830 } 00:22:20.830 EOF 00:22:20.830 )") 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:20.830 { 00:22:20.830 "params": { 00:22:20.830 "name": "Nvme$subsystem", 00:22:20.830 "trtype": "$TEST_TRANSPORT", 00:22:20.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.830 "adrfam": "ipv4", 00:22:20.830 "trsvcid": "$NVMF_PORT", 00:22:20.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.830 "hdgst": ${hdgst:-false}, 00:22:20.830 "ddgst": ${ddgst:-false} 00:22:20.830 }, 00:22:20.830 "method": "bdev_nvme_attach_controller" 00:22:20.830 } 00:22:20.830 EOF 00:22:20.830 )") 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:20.830 { 00:22:20.830 "params": { 00:22:20.830 "name": "Nvme$subsystem", 00:22:20.830 "trtype": "$TEST_TRANSPORT", 00:22:20.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.830 "adrfam": "ipv4", 00:22:20.830 "trsvcid": "$NVMF_PORT", 00:22:20.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:22:20.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.830 "hdgst": ${hdgst:-false}, 00:22:20.830 "ddgst": ${ddgst:-false} 00:22:20.830 }, 00:22:20.830 "method": "bdev_nvme_attach_controller" 00:22:20.830 } 00:22:20.830 EOF 00:22:20.830 )") 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:20.830 { 00:22:20.830 "params": { 00:22:20.830 "name": "Nvme$subsystem", 00:22:20.830 "trtype": "$TEST_TRANSPORT", 00:22:20.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.830 "adrfam": "ipv4", 00:22:20.830 "trsvcid": "$NVMF_PORT", 00:22:20.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.830 "hdgst": ${hdgst:-false}, 00:22:20.830 "ddgst": ${ddgst:-false} 00:22:20.830 }, 00:22:20.830 "method": "bdev_nvme_attach_controller" 00:22:20.830 } 00:22:20.830 EOF 00:22:20.830 )") 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:20.830 { 00:22:20.830 "params": { 00:22:20.830 "name": "Nvme$subsystem", 00:22:20.830 "trtype": "$TEST_TRANSPORT", 00:22:20.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.830 "adrfam": "ipv4", 00:22:20.830 "trsvcid": "$NVMF_PORT", 00:22:20.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.830 "hdgst": ${hdgst:-false}, 00:22:20.830 "ddgst": 
${ddgst:-false} 00:22:20.830 }, 00:22:20.830 "method": "bdev_nvme_attach_controller" 00:22:20.830 } 00:22:20.830 EOF 00:22:20.830 )") 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:20.830 { 00:22:20.830 "params": { 00:22:20.830 "name": "Nvme$subsystem", 00:22:20.830 "trtype": "$TEST_TRANSPORT", 00:22:20.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.830 "adrfam": "ipv4", 00:22:20.830 "trsvcid": "$NVMF_PORT", 00:22:20.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.830 "hdgst": ${hdgst:-false}, 00:22:20.830 "ddgst": ${ddgst:-false} 00:22:20.830 }, 00:22:20.830 "method": "bdev_nvme_attach_controller" 00:22:20.830 } 00:22:20.830 EOF 00:22:20.830 )") 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:20.830 [2024-12-10 12:30:42.816636] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:22:20.830 [2024-12-10 12:30:42.816687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1690183 ] 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:20.830 { 00:22:20.830 "params": { 00:22:20.830 "name": "Nvme$subsystem", 00:22:20.830 "trtype": "$TEST_TRANSPORT", 00:22:20.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.830 "adrfam": "ipv4", 00:22:20.830 "trsvcid": "$NVMF_PORT", 00:22:20.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.830 "hdgst": ${hdgst:-false}, 00:22:20.830 "ddgst": ${ddgst:-false} 00:22:20.830 }, 00:22:20.830 "method": "bdev_nvme_attach_controller" 00:22:20.830 } 00:22:20.830 EOF 00:22:20.830 )") 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:20.830 { 00:22:20.830 "params": { 00:22:20.830 "name": "Nvme$subsystem", 00:22:20.830 "trtype": "$TEST_TRANSPORT", 00:22:20.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.830 "adrfam": "ipv4", 00:22:20.830 "trsvcid": "$NVMF_PORT", 00:22:20.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.830 "hdgst": ${hdgst:-false}, 00:22:20.830 "ddgst": ${ddgst:-false} 00:22:20.830 }, 00:22:20.830 "method": 
"bdev_nvme_attach_controller" 00:22:20.830 } 00:22:20.830 EOF 00:22:20.830 )") 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:20.830 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:20.830 { 00:22:20.831 "params": { 00:22:20.831 "name": "Nvme$subsystem", 00:22:20.831 "trtype": "$TEST_TRANSPORT", 00:22:20.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.831 "adrfam": "ipv4", 00:22:20.831 "trsvcid": "$NVMF_PORT", 00:22:20.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.831 "hdgst": ${hdgst:-false}, 00:22:20.831 "ddgst": ${ddgst:-false} 00:22:20.831 }, 00:22:20.831 "method": "bdev_nvme_attach_controller" 00:22:20.831 } 00:22:20.831 EOF 00:22:20.831 )") 00:22:20.831 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:20.831 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:22:20.831 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:20.831 12:30:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:20.831 "params": { 00:22:20.831 "name": "Nvme1", 00:22:20.831 "trtype": "tcp", 00:22:20.831 "traddr": "10.0.0.2", 00:22:20.831 "adrfam": "ipv4", 00:22:20.831 "trsvcid": "4420", 00:22:20.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:20.831 "hdgst": false, 00:22:20.831 "ddgst": false 00:22:20.831 }, 00:22:20.831 "method": "bdev_nvme_attach_controller" 00:22:20.831 },{ 00:22:20.831 "params": { 00:22:20.831 "name": "Nvme2", 00:22:20.831 "trtype": "tcp", 00:22:20.831 "traddr": "10.0.0.2", 00:22:20.831 "adrfam": "ipv4", 00:22:20.831 "trsvcid": "4420", 00:22:20.831 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:20.831 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:20.831 "hdgst": false, 00:22:20.831 "ddgst": false 00:22:20.831 }, 00:22:20.831 "method": "bdev_nvme_attach_controller" 00:22:20.831 },{ 00:22:20.831 "params": { 00:22:20.831 "name": "Nvme3", 00:22:20.831 "trtype": "tcp", 00:22:20.831 "traddr": "10.0.0.2", 00:22:20.831 "adrfam": "ipv4", 00:22:20.831 "trsvcid": "4420", 00:22:20.831 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:20.831 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:20.831 "hdgst": false, 00:22:20.831 "ddgst": false 00:22:20.831 }, 00:22:20.831 "method": "bdev_nvme_attach_controller" 00:22:20.831 },{ 00:22:20.831 "params": { 00:22:20.831 "name": "Nvme4", 00:22:20.831 "trtype": "tcp", 00:22:20.831 "traddr": "10.0.0.2", 00:22:20.831 "adrfam": "ipv4", 00:22:20.831 "trsvcid": "4420", 00:22:20.831 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:20.831 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:20.831 "hdgst": false, 00:22:20.831 "ddgst": false 00:22:20.831 }, 00:22:20.831 "method": "bdev_nvme_attach_controller" 00:22:20.831 },{ 00:22:20.831 "params": { 
00:22:20.831 "name": "Nvme5", 00:22:20.831 "trtype": "tcp", 00:22:20.831 "traddr": "10.0.0.2", 00:22:20.831 "adrfam": "ipv4", 00:22:20.831 "trsvcid": "4420", 00:22:20.831 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:20.831 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:20.831 "hdgst": false, 00:22:20.831 "ddgst": false 00:22:20.831 }, 00:22:20.831 "method": "bdev_nvme_attach_controller" 00:22:20.831 },{ 00:22:20.831 "params": { 00:22:20.831 "name": "Nvme6", 00:22:20.831 "trtype": "tcp", 00:22:20.831 "traddr": "10.0.0.2", 00:22:20.831 "adrfam": "ipv4", 00:22:20.831 "trsvcid": "4420", 00:22:20.831 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:20.831 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:20.831 "hdgst": false, 00:22:20.831 "ddgst": false 00:22:20.831 }, 00:22:20.831 "method": "bdev_nvme_attach_controller" 00:22:20.831 },{ 00:22:20.831 "params": { 00:22:20.831 "name": "Nvme7", 00:22:20.831 "trtype": "tcp", 00:22:20.831 "traddr": "10.0.0.2", 00:22:20.831 "adrfam": "ipv4", 00:22:20.831 "trsvcid": "4420", 00:22:20.831 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:20.831 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:20.831 "hdgst": false, 00:22:20.831 "ddgst": false 00:22:20.831 }, 00:22:20.831 "method": "bdev_nvme_attach_controller" 00:22:20.831 },{ 00:22:20.831 "params": { 00:22:20.831 "name": "Nvme8", 00:22:20.831 "trtype": "tcp", 00:22:20.831 "traddr": "10.0.0.2", 00:22:20.831 "adrfam": "ipv4", 00:22:20.831 "trsvcid": "4420", 00:22:20.831 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:20.831 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:20.831 "hdgst": false, 00:22:20.831 "ddgst": false 00:22:20.831 }, 00:22:20.831 "method": "bdev_nvme_attach_controller" 00:22:20.831 },{ 00:22:20.831 "params": { 00:22:20.831 "name": "Nvme9", 00:22:20.831 "trtype": "tcp", 00:22:20.831 "traddr": "10.0.0.2", 00:22:20.831 "adrfam": "ipv4", 00:22:20.831 "trsvcid": "4420", 00:22:20.831 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:20.831 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:20.831 "hdgst": false, 00:22:20.831 "ddgst": false 00:22:20.831 }, 00:22:20.831 "method": "bdev_nvme_attach_controller" 00:22:20.831 },{ 00:22:20.831 "params": { 00:22:20.831 "name": "Nvme10", 00:22:20.831 "trtype": "tcp", 00:22:20.831 "traddr": "10.0.0.2", 00:22:20.831 "adrfam": "ipv4", 00:22:20.831 "trsvcid": "4420", 00:22:20.831 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:20.831 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:20.831 "hdgst": false, 00:22:20.831 "ddgst": false 00:22:20.831 }, 00:22:20.831 "method": "bdev_nvme_attach_controller" 00:22:20.831 }' 00:22:20.831 [2024-12-10 12:30:42.893025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.831 [2024-12-10 12:30:42.933256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.728 Running I/O for 10 seconds... 00:22:22.728 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:22.728 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:22.728 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:22.728 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.728 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:22.728 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.728 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:22.728 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
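The `gen_nvmf_target_json 1 2 ... 10` call above stamps out one `bdev_nvme_attach_controller` entry per subsystem index from a heredoc template, then joins them with `jq`/`printf` into the JSON that bdevperf reads on `/dev/fd/63`. A simplified Python sketch of just the per-controller expansion (the real helper pipes these entries through `jq` into the full SPDK config; field values here are the ones visible in the printed output above):

```python
import json

def gen_nvmf_target_json(subsystems, transport="tcp",
                         traddr="10.0.0.2", trsvcid="4420"):
    """Build one bdev_nvme_attach_controller entry per subsystem index,
    mirroring the heredoc template in nvmf/common.sh."""
    entries = []
    for i in subsystems:
        entries.append({
            "params": {
                "name": f"Nvme{i}",
                "trtype": transport,
                "traddr": traddr,
                "adrfam": "ipv4",
                "trsvcid": trsvcid,
                "subnqn": f"nqn.2016-06.io.spdk:cnode{i}",
                "hostnqn": f"nqn.2016-06.io.spdk:host{i}",
                "hdgst": False,
                "ddgst": False,
            },
            "method": "bdev_nvme_attach_controller",
        })
    return entries

config = gen_nvmf_target_json(range(1, 11))
print(json.dumps(config[0], indent=2))
```

Each entry tells bdevperf to attach one controller per NVMe-oF subsystem (cnode1 through cnode10), which is why ten `Nvme$i` template blocks appear back-to-back in the trace.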
/var/tmp/bdevperf.sock Nvme1n1 00:22:22.728 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:22.728 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:22.728 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:22.728 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:22.728 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:22.728 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:22.728 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:22.728 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:22.728 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.728 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:22.728 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.986 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:22.986 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:22.986 12:30:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:23.243 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
00:22:23.243 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:23.243 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:23.243 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:23.243 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.243 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:23.243 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.243 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:23.243 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:23.243 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:23.516 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:23.516 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:23.516 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:23.516 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:23.516 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.516 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:22:23.516 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.516 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:23.516 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:23.516 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:23.516 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:23.516 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:23.516 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1689901 00:22:23.516 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1689901 ']' 00:22:23.516 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1689901 00:22:23.516 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:22:23.516 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.516 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1689901 00:22:23.516 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:23.516 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:23.516 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1689901' 00:22:23.516 killing process with pid 1689901 00:22:23.516 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1689901 00:22:23.516 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1689901 00:22:23.516 [2024-12-10 12:30:45.571030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1693d70 is same with the state(6) to be set 00:22:23.517 [2024-12-10 12:30:45.573953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694240 is same with the state(6) to be set 00:22:23.518 [2024-12-10 12:30:45.575622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694710 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577200] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577206] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577259] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577279] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577298] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577305] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577320] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577340] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577361] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577367] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.519 [2024-12-10 12:30:45.577432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.577439] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.577446] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1694c00 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.578077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16950d0 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.578098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16950d0 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.578105] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16950d0 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.578112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16950d0 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.578748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695450 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.578767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695450 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.578778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695450 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.578784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695450 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579671] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579692] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579772] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579797] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579851] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579908] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579929] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579935] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579948] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.579999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.580005] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.580012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.580020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.580027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.580033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.580039] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.580046] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.580052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.580058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.580064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.580071] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.580077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.580083] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695920 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.580850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.580864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.580871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.580878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.580885] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.580892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.580899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.580905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.580911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.520 [2024-12-10 12:30:45.580917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.580924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.580930] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.580937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.580944] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.580950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.580956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.580965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.580971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.580978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.580989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.580996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581014] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581026] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581039] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581071] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581089] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581141] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set 00:22:23.521 [2024-12-10 12:30:45.581174] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1695e10 is same with the state(6) to be set
(message repeated at each timestamp from [2024-12-10 12:30:45.581180] through [2024-12-10 12:30:45.581269])
00:22:23.521 [2024-12-10 12:30:45.581830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18067a0 is same with the state(6) to be set
(message repeated at each timestamp from [2024-12-10 12:30:45.581845] through [2024-12-10 12:30:45.582238])
00:22:23.522 [2024-12-10 12:30:45.588645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:23.522 [2024-12-10 12:30:45.588676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.522 [2024-12-10 12:30:45.588687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:23.522 [2024-12-10 12:30:45.588694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.522 [2024-12-10 12:30:45.588703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-12-10 12:30:45.588710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.522 [2024-12-10 12:30:45.588717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:23.522 [2024-12-10 12:30:45.588724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.522 [2024-12-10 12:30:45.588731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5efe50 is same with the state(6) to be set
(the same ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 / ABORTED - SQ DELETION (00/08) sequence, each ending in the nvme_tcp.c: 326 recv-state *ERROR*, repeated for tqpair=0xa1a300, 0xa2d3d0, 0xa57bd0, 0xa579b0, 0x5e3c90, 0x504610, 0xa1b280, 0x5ef9c0 and 0x5e3e90, timestamps [2024-12-10 12:30:45.588764] through [2024-12-10 12:30:45.589497])
00:22:23.523 [2024-12-10 12:30:45.589951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.523 [2024-12-10 12:30:45.589976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the same WRITE / ABORTED - SQ DELETION pair repeated for cid:31 through cid:63, lba advancing from 28544 to 32640 in steps of 128, timestamps [2024-12-10 12:30:45.589991] through [2024-12-10 12:30:45.590509])
00:22:23.524 [2024-12-10 12:30:45.590517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.524 [2024-12-10 12:30:45.590524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.524 [2024-12-10 12:30:45.590533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.524 [2024-12-10 12:30:45.590540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.524 [2024-12-10 12:30:45.590549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.524 [2024-12-10 12:30:45.590556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.524 [2024-12-10 12:30:45.590564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3
nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.524 [2024-12-10 12:30:45.590571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.524 [2024-12-10 12:30:45.590579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.524 [2024-12-10 12:30:45.590588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.524 [2024-12-10 12:30:45.590597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.524 [2024-12-10 12:30:45.590603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.524 [2024-12-10 12:30:45.590611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.524 [2024-12-10 12:30:45.590618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.524 [2024-12-10 12:30:45.590626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.524 [2024-12-10 12:30:45.590632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.524 [2024-12-10 12:30:45.590640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.524 [2024-12-10 12:30:45.590647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:23.524 [2024-12-10 12:30:45.590656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.524 [2024-12-10 12:30:45.590664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.524 [2024-12-10 12:30:45.590674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.524 [2024-12-10 12:30:45.590681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.524 [2024-12-10 12:30:45.590689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.524 [2024-12-10 12:30:45.590696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.524 [2024-12-10 12:30:45.590704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.524 [2024-12-10 12:30:45.590711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.524 [2024-12-10 12:30:45.590719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.524 [2024-12-10 12:30:45.590727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.524 [2024-12-10 12:30:45.590735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.524 [2024-12-10 12:30:45.590742] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.524 [2024-12-10 12:30:45.590751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.524 [2024-12-10 12:30:45.590758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.524 [2024-12-10 12:30:45.590767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.524 [2024-12-10 12:30:45.590774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.524 [2024-12-10 12:30:45.590783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.524 [2024-12-10 12:30:45.590790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.525 [2024-12-10 12:30:45.590799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.525 [2024-12-10 12:30:45.590806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.525 [2024-12-10 12:30:45.590814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.525 [2024-12-10 12:30:45.590821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.525 [2024-12-10 12:30:45.590829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.525 [2024-12-10 12:30:45.590836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.525 [2024-12-10 12:30:45.590845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.525 [2024-12-10 12:30:45.590853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.525 [2024-12-10 12:30:45.590861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.525 [2024-12-10 12:30:45.590869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.525 [2024-12-10 12:30:45.590877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.525 [2024-12-10 12:30:45.590884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.525 [2024-12-10 12:30:45.590892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.525 [2024-12-10 12:30:45.590898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.525 [2024-12-10 12:30:45.590907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.525 [2024-12-10 12:30:45.590914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:23.525 [2024-12-10 12:30:45.590923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.525 [2024-12-10 12:30:45.590929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.525 [2024-12-10 12:30:45.590939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.525 [2024-12-10 12:30:45.590946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.525 [2024-12-10 12:30:45.590954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.525 [2024-12-10 12:30:45.590960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.525 [2024-12-10 12:30:45.590968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.525 [2024-12-10 12:30:45.590976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.525 [2024-12-10 12:30:45.591008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:23.525 [2024-12-10 12:30:45.591371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.525 [2024-12-10 12:30:45.591390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.525 
00:22:23.527 [2024-12-10 12:30:45.598564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.598571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.598596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:23.527 [2024-12-10 12:30:45.598999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5efe50 (9): Bad file descriptor 00:22:23.527 [2024-12-10 12:30:45.599029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1a300 (9): Bad file descriptor 00:22:23.527 [2024-12-10 12:30:45.599043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2d3d0 (9): Bad file descriptor 00:22:23.527 [2024-12-10 12:30:45.599061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa57bd0 (9): Bad file descriptor 00:22:23.527 [2024-12-10 12:30:45.599074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa579b0 (9): Bad file descriptor 00:22:23.527 [2024-12-10 12:30:45.599087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e3c90 (9): Bad file descriptor 00:22:23.527 [2024-12-10 12:30:45.599099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x504610 (9): Bad file descriptor 00:22:23.527 [2024-12-10 12:30:45.599111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1b280 (9): Bad file descriptor 00:22:23.527 [2024-12-10 12:30:45.599126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef9c0 (9): Bad file descriptor 
00:22:23.527 [2024-12-10 12:30:45.599140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e3e90 (9): Bad file descriptor 00:22:23.527 [2024-12-10 12:30:45.601410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601513] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 
[2024-12-10 12:30:45.601790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.527 [2024-12-10 12:30:45.601876] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.527 [2024-12-10 12:30:45.601884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.601891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.601899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.601906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.601914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.601923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.601932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.601939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.601947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.601955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.601963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.601970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.601980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.601987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.601996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 
12:30:45.602144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602255] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.602532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.528 [2024-12-10 12:30:45.602541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.528 [2024-12-10 12:30:45.604281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:23.528 [2024-12-10 12:30:45.604317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:23.528 [2024-12-10 12:30:45.605331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:23.528 [2024-12-10 12:30:45.605549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.528 [2024-12-10 12:30:45.605571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1a300 with addr=10.0.0.2, port=4420 00:22:23.528 [2024-12-10 12:30:45.605583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a300 is same with the state(6) to be set 00:22:23.528 [2024-12-10 12:30:45.605744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.528 [2024-12-10 12:30:45.605760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: 
sock connection error of tqpair=0x5e3c90 with addr=10.0.0.2, port=4420 00:22:23.528 [2024-12-10 12:30:45.605770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e3c90 is same with the state(6) to be set 00:22:23.528 [2024-12-10 12:30:45.605817] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:23.529 [2024-12-10 12:30:45.605872] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:23.529 [2024-12-10 12:30:45.605924] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:23.529 [2024-12-10 12:30:45.605976] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:23.529 [2024-12-10 12:30:45.606054] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:23.529 [2024-12-10 12:30:45.606105] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:23.529 [2024-12-10 12:30:45.606155] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:23.529 [2024-12-10 12:30:45.606602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.529 [2024-12-10 12:30:45.606622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa57bd0 with addr=10.0.0.2, port=4420 00:22:23.529 [2024-12-10 12:30:45.606634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa57bd0 is same with the state(6) to be set 00:22:23.529 [2024-12-10 12:30:45.606650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1a300 (9): Bad file descriptor 00:22:23.529 [2024-12-10 12:30:45.606670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e3c90 (9): Bad file descriptor 00:22:23.529 [2024-12-10 12:30:45.606789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa57bd0 (9): Bad file descriptor 00:22:23.529 
[2024-12-10 12:30:45.606806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:23.529 [2024-12-10 12:30:45.606816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:23.529 [2024-12-10 12:30:45.606828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:23.529 [2024-12-10 12:30:45.606839] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:23.529 [2024-12-10 12:30:45.606851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:23.529 [2024-12-10 12:30:45.606860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:23.529 [2024-12-10 12:30:45.606876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:23.529 [2024-12-10 12:30:45.606885] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:23.529 [2024-12-10 12:30:45.606943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:23.529 [2024-12-10 12:30:45.606953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:23.529 [2024-12-10 12:30:45.606962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:23.529 [2024-12-10 12:30:45.606971] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:22:23.529 [2024-12-10 12:30:45.609127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:23.529 [2024-12-10 12:30:45.609505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609618] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.529 [2024-12-10 12:30:45.609731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.529 [2024-12-10 12:30:45.609740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.609752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.609761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.609773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.609782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.609793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.609805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.609816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.609825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.609837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.609846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.609858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.609867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.609879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.609889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.609901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.609910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.609922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.609932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.609944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.609954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.609965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 
12:30:45.609975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.609987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.609996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.610018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.610039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.610061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.610084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610095] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.610105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.610126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.610147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.610172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.610194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.610215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.610237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.610261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.610282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.610303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.610324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 
[2024-12-10 12:30:45.610347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.610369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.610390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.610412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.610433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.610454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610465] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.610475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.530 [2024-12-10 12:30:45.610496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.530 [2024-12-10 12:30:45.610507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3f10 is same with the state(6) to be set 00:22:23.530 [2024-12-10 12:30:45.611927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.611945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.611960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.611971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.611984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.611995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:23.531 [2024-12-10 12:30:45.612142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612280] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 12:30:45.612551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.531 [2024-12-10 12:30:45.612560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.531 [2024-12-10 
12:30:45.612567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.531 [2024-12-10 12:30:45.612577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.531 [2024-12-10 12:30:45.612584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.531 [2024-12-10 12:30:45.612593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.531 [2024-12-10 12:30:45.612601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.531 [2024-12-10 12:30:45.612610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.531 [2024-12-10 12:30:45.612618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.531 [2024-12-10 12:30:45.612627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.531 [2024-12-10 12:30:45.612634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.531 [2024-12-10 12:30:45.612643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.531 [2024-12-10 12:30:45.612650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.531 [2024-12-10 12:30:45.612660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.531 [2024-12-10 12:30:45.612667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.531 [2024-12-10 12:30:45.612676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.531 [2024-12-10 12:30:45.612683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.531 [2024-12-10 12:30:45.612692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.531 [2024-12-10 12:30:45.612700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.531 [2024-12-10 12:30:45.612708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.612715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.612723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.612730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.612738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.612747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.612755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.612763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.612771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.612778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.612787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.612794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.612802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.612809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.612817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.612824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.612833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.612839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.612848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.612855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.612864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.612871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.612880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.612886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.612896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.612902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.612912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.612918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.612927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.612933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.612944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.612950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.612959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.612966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.612974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.612981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.612990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.612997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.613006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.613013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.613021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.613028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.613036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.613043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.613051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.613058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.613067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.613074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.613081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5050 is same with the state(6) to be set
00:22:23.532 [2024-12-10 12:30:45.614131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.614146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.614161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.614169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.614179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.614186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.614197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.614204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.614213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.614220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.614229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.614236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.614245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.614252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.614260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.614266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.614275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.614281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.614290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.614296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.614304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.614311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.614319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.614325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.614333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.614340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.614349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.614356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.614364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.614370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.614379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.532 [2024-12-10 12:30:45.614388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.532 [2024-12-10 12:30:45.614397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.614991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.614998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.615007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.615014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.533 [2024-12-10 12:30:45.615022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.533 [2024-12-10 12:30:45.615030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.534 [2024-12-10 12:30:45.615038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.534 [2024-12-10 12:30:45.615045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.534 [2024-12-10 12:30:45.615053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.534 [2024-12-10 12:30:45.615061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.534 [2024-12-10 12:30:45.615070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.534 [2024-12-10 12:30:45.615077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.534 [2024-12-10 12:30:45.615085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.534 [2024-12-10 12:30:45.615092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.534 [2024-12-10 12:30:45.615100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.534 [2024-12-10 12:30:45.615108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.534 [2024-12-10 12:30:45.615116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.534 [2024-12-10 12:30:45.615123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.534 [2024-12-10 12:30:45.615132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.534 [2024-12-10 12:30:45.615139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.534 [2024-12-10 12:30:45.615147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e94f0 is same with the state(6) to be set
00:22:23.534 [2024-12-10 12:30:45.616197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.534 [2024-12-10 12:30:45.616212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.534 [2024-12-10 12:30:45.616225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.534 [2024-12-10 12:30:45.616233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.534 [2024-12-10 12:30:45.616242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.534 [2024-12-10 12:30:45.616250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.534 [2024-12-10 12:30:45.616258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.534 [2024-12-10 12:30:45.616266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.534 [2024-12-10 12:30:45.616274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.534 [2024-12-10 12:30:45.616282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.534 [2024-12-10 12:30:45.616290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.534 [2024-12-10 12:30:45.616298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.534 [2024-12-10 12:30:45.616307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.534 [2024-12-10 12:30:45.616314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.534 [2024-12-10 12:30:45.616325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.534 [2024-12-10 12:30:45.616333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.534 [2024-12-10 12:30:45.616342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.534 [2024-12-10 12:30:45.616350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.534 [2024-12-10 12:30:45.616358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.534 [2024-12-10 12:30:45.616366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.534 [2024-12-10 12:30:45.616374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.534 [2024-12-10 12:30:45.616382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.534 [2024-12-10 12:30:45.616390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.534 [2024-12-10 12:30:45.616399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.534 [2024-12-10 12:30:45.616408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.534 [2024-12-10 12:30:45.616415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.534 [2024-12-10 12:30:45.616423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.534 [2024-12-10 12:30:45.616433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.534 [2024-12-10 12:30:45.616441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.534 [2024-12-10 12:30:45.616449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.534 [2024-12-10 12:30:45.616459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.534 [2024-12-10 12:30:45.616467]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.534 [2024-12-10 12:30:45.616476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.534 [2024-12-10 12:30:45.616483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.534 [2024-12-10 12:30:45.616491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.534 [2024-12-10 12:30:45.616499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.534 [2024-12-10 12:30:45.616507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.534 [2024-12-10 12:30:45.616514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.534 [2024-12-10 12:30:45.616523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.534 [2024-12-10 12:30:45.616530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.534 [2024-12-10 12:30:45.616538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.534 [2024-12-10 12:30:45.616546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.534 [2024-12-10 12:30:45.616555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.534 [2024-12-10 12:30:45.616562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.534 [2024-12-10 12:30:45.616570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.534 [2024-12-10 12:30:45.616578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.534 [2024-12-10 12:30:45.616586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.534 [2024-12-10 12:30:45.616594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.534 [2024-12-10 12:30:45.616602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.534 [2024-12-10 12:30:45.616609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.534 [2024-12-10 12:30:45.616619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.534 [2024-12-10 12:30:45.616626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.534 [2024-12-10 12:30:45.616636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.534 [2024-12-10 12:30:45.616643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:23.534 [2024-12-10 12:30:45.616652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.534 [2024-12-10 12:30:45.616659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.534 [2024-12-10 12:30:45.616667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.534 [2024-12-10 12:30:45.616675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.616684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.616690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.616699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.616706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.616715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.616721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.616730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 
12:30:45.616737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.616747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.616754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.616763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.616770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.616780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.616787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.616796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.616803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.616811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.616818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.616827] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.616836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.616845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.616853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.616860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.616867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.616877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.616884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.616893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.616900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.616908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.616915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.616924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.616931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.616940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.616946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.616955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.616962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.616970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.616977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.616985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.616992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.617001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 
[2024-12-10 12:30:45.617008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.617017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.617023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.617033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.617040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.617049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.617056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.617065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.617072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.617082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.617089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.617097] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.617107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.617117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.617124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.617133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.617139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.617149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.617156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.617170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.617177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.617186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.617193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.617202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.617209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.617218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.617224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.617234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.617244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.617252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f6610 is same with the state(6) to be set 00:22:23.535 [2024-12-10 12:30:45.618309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.618325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.618337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.618345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:23.535 [2024-12-10 12:30:45.618353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.618361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.618370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.618380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.535 [2024-12-10 12:30:45.618389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.535 [2024-12-10 12:30:45.618397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618446] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 
nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:23.536 [2024-12-10 12:30:45.618635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618722] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.618978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.618987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 
12:30:45.618993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.619002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.619009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.619018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.619024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.619033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.536 [2024-12-10 12:30:45.619039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.536 [2024-12-10 12:30:45.619049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.619056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.619067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.619074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.619083] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.619090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.619100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.619107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.619117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.619127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.619136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.619143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.619152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.619163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.619172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.619180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.619188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.619195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.619205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.619211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.619220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.619229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.619238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.619244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.619253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.619259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.619268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 
[2024-12-10 12:30:45.619276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.619285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.619292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.619300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.619307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.619318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.619326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.619336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.619344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.619353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.619360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.619368] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f1200 is same with the state(6) to be set 00:22:23.537 [2024-12-10 12:30:45.620424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.620441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.620453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.620460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.620469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.620477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.620485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.620493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.620501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.620509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.620518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.620525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.620533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.620541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.620550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.620558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.620567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.620574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.620583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.620590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.620603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.620611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:23.537 [2024-12-10 12:30:45.620621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.620628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.620638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.620645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.620654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.620662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.620670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.620677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.620686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.620694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.620703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.620711] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.620720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.620728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.620736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.620744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.620752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.620759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.537 [2024-12-10 12:30:45.620768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.537 [2024-12-10 12:30:45.620775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.620783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.620791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.620799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.620808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.620817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.620824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.620833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.620840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.620848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.620857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.620866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.620874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.620882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.620890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.620899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.620907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.620915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.620923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.620931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.620938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.620947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.620954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.620963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.620970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.620979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 
12:30:45.620987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.620996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.621003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.621013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.621021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.621030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.621037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.621046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.621054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.621063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.621070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.621079] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.621085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.621094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.621101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.621110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.621117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.621127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.621133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.621143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.621150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.621174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.621182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.621191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.621198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.621207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.621214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.621223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.621232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.621241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.621248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.621258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.621265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.621274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 
[2024-12-10 12:30:45.621282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.621291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.621298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.621306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.621313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.621321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.621329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.621336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.621343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.621352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-10 12:30:45.621361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.538 [2024-12-10 12:30:45.621369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.538 [2024-12-10 12:30:45.621377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.538 [2024-12-10 12:30:45.621385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.538 [2024-12-10 12:30:45.621393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.538 [2024-12-10 12:30:45.621401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.538 [2024-12-10 12:30:45.621409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.538 [2024-12-10 12:30:45.621417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.538 [2024-12-10 12:30:45.621424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.538 [2024-12-10 12:30:45.621435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.538 [2024-12-10 12:30:45.621442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.538 [2024-12-10 12:30:45.621450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.621458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.621466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.621473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.621481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.621491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.621498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x193ec50 is same with the state(6) to be set
00:22:23.539 [2024-12-10 12:30:45.622545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.622987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.622994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.623003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.623010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.623018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.623026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.623034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.623042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.623050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.623057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.623066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.623074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.623082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.539 [2024-12-10 12:30:45.623089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.539 [2024-12-10 12:30:45.623099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:23.540 [2024-12-10 12:30:45.623602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:23.540 [2024-12-10 12:30:45.623610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb4b50 is same with the state(6) to be set
00:22:23.540 [2024-12-10 12:30:45.624627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:23.540 [2024-12-10 12:30:45.624646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*:
[nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:23.540 [2024-12-10 12:30:45.624658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:22:23.540 [2024-12-10 12:30:45.624670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:22:23.540 [2024-12-10 12:30:45.624750] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:22:23.540 [2024-12-10 12:30:45.624764] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:22:23.540 [2024-12-10 12:30:45.624775] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:22:23.540 [2024-12-10 12:30:45.624846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:23.540 [2024-12-10 12:30:45.624859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:22:23.540 task offset: 28416 on job bdev=Nvme5n1 fails
00:22:23.540
00:22:23.540 Latency(us)
00:22:23.540 [2024-12-10T11:30:45.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:23.540 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:23.540 Job: Nvme1n1 ended in about 0.91 seconds with error
00:22:23.540 Verification LBA range: start 0x0 length 0x400
00:22:23.540 Nvme1n1 : 0.91 210.60 13.16 70.20 0.00 225665.56 17210.32 227039.50
00:22:23.540 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:23.540 Job: Nvme2n1 ended in about 0.91 seconds with error
00:22:23.540 Verification LBA range: start 0x0 length 0x400
00:22:23.540 Nvme2n1 : 0.91 210.03 13.13 70.01 0.00 222337.34 17666.23 220656.86
00:22:23.540 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:23.540 Job: Nvme3n1 ended in about 0.92 seconds with error
00:22:23.540 Verification LBA range: start 0x0 length 0x400
00:22:23.540 Nvme3n1 : 0.92 209.56 13.10 69.85 0.00 218788.29 15272.74 216097.84
00:22:23.540 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:23.540 Job: Nvme4n1 ended in about 0.92 seconds with error
00:22:23.541 Verification LBA range: start 0x0 length 0x400
00:22:23.541 Nvme4n1 : 0.92 209.08 13.07 69.69 0.00 215336.18 14474.91 223392.28
00:22:23.541 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:23.541 Job: Nvme5n1 ended in about 0.90 seconds with error
00:22:23.541 Verification LBA range: start 0x0 length 0x400
00:22:23.541 Nvme5n1 : 0.90 213.23 13.33 71.08 0.00 206872.82 9630.94 217009.64
00:22:23.541 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:23.541 Job: Nvme6n1 ended in about 0.90 seconds with error
00:22:23.541 Verification LBA range: start 0x0 length 0x400
00:22:23.541 Nvme6n1 : 0.90 213.00 13.31 71.00 0.00 203147.35 9175.04 223392.28
00:22:23.541 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:23.541 Job: Nvme7n1 ended in about 0.92 seconds with error
00:22:23.541 Verification LBA range: start 0x0 length 0x400
00:22:23.541 Nvme7n1 : 0.92 212.95 13.31 69.53 0.00 200778.85 14019.01 218833.25
00:22:23.541 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:23.541 Job: Nvme8n1 ended in about 0.92 seconds with error
00:22:23.541 Verification LBA range: start 0x0 length 0x400
00:22:23.541 Nvme8n1 : 0.92 208.12 13.01 69.37 0.00 200466.92 15272.74 215186.03
00:22:23.541 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:23.541 Job: Nvme9n1 ended in about 0.92 seconds with error
00:22:23.541 Verification LBA range: start 0x0 length 0x400
00:22:23.541 Nvme9n1 : 0.92 138.43 8.65 69.21 0.00 262740.59 22909.11 240716.58
00:22:23.541 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:23.541 Job: Nvme10n1 ended in about 0.90 seconds with error
00:22:23.541 Verification LBA range: start 0x0 length 0x400
00:22:23.541 Nvme10n1 : 0.90 212.36 13.27 70.79 0.00 187970.62 4331.07 237069.36
00:22:23.541 [2024-12-10T11:30:45.709Z] ===================================================================================================================
00:22:23.541 [2024-12-10T11:30:45.709Z] Total : 2037.35 127.33 700.74 0.00 213151.39 4331.07 240716.58
00:22:23.541 [2024-12-10 12:30:45.659662] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:23.541 [2024-12-10 12:30:45.659716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:22:23.541 [2024-12-10 12:30:45.660064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.541 [2024-12-10 12:30:45.660084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5efe50 with addr=10.0.0.2, port=4420
00:22:23.541 [2024-12-10 12:30:45.660095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5efe50 is same with the state(6) to be set
00:22:23.541 [2024-12-10 12:30:45.660323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.541 [2024-12-10 12:30:45.660336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e3e90 with addr=10.0.0.2, port=4420
00:22:23.541 [2024-12-10 12:30:45.660344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e3e90 is same with the state(6) to be set
00:22:23.541 [2024-12-10 12:30:45.660538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.541 [2024-12-10 12:30:45.660551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ef9c0 with addr=10.0.0.2, port=4420
00:22:23.541 [2024-12-10 12:30:45.660558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ef9c0 is same with the state(6) to be set
00:22:23.541 [2024-12-10 12:30:45.660784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.541 [2024-12-10 12:30:45.660797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1b280 with addr=10.0.0.2, port=4420
00:22:23.541 [2024-12-10 12:30:45.660804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1b280 is same with the state(6) to be set
00:22:23.541 [2024-12-10 12:30:45.662372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:22:23.541 [2024-12-10 12:30:45.662393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:22:23.541 [2024-12-10 12:30:45.662682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.541 [2024-12-10 12:30:45.662699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x504610 with addr=10.0.0.2, port=4420
00:22:23.541 [2024-12-10 12:30:45.662708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x504610 is same with the state(6) to be set
00:22:23.541 [2024-12-10 12:30:45.662954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.541 [2024-12-10 12:30:45.662970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa2d3d0 with addr=10.0.0.2, port=4420
00:22:23.541 [2024-12-10 12:30:45.662981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa2d3d0 is same with the state(6) to be set
00:22:23.541 [2024-12-10 12:30:45.663204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.541 [2024-12-10 12:30:45.663224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa579b0 with addr=10.0.0.2, port=4420
00:22:23.541 [2024-12-10 12:30:45.663233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa579b0 is same with the state(6) to be set
00:22:23.541 [2024-12-10 12:30:45.663250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5efe50 (9): Bad file descriptor
00:22:23.541 [2024-12-10 12:30:45.663263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e3e90 (9): Bad file descriptor
00:22:23.541 [2024-12-10 12:30:45.663273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ef9c0 (9): Bad file descriptor
00:22:23.541 [2024-12-10 12:30:45.663283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1b280 (9): Bad file descriptor
00:22:23.541 [2024-12-10 12:30:45.663315] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:22:23.541 [2024-12-10 12:30:45.663332] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:22:23.541 [2024-12-10 12:30:45.663342] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:22:23.541 [2024-12-10 12:30:45.663353] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:22:23.541 [2024-12-10 12:30:45.663363] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:22:23.541 [2024-12-10 12:30:45.663436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:22:23.541 [2024-12-10 12:30:45.663625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.541 [2024-12-10 12:30:45.663639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5e3c90 with addr=10.0.0.2, port=4420
00:22:23.541 [2024-12-10 12:30:45.663647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5e3c90 is same with the state(6) to be set
00:22:23.541 [2024-12-10 12:30:45.663867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.541 [2024-12-10 12:30:45.663878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa1a300 with addr=10.0.0.2, port=4420
00:22:23.541 [2024-12-10 12:30:45.663889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa1a300 is same with the state(6) to be set
00:22:23.541 [2024-12-10 12:30:45.663899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x504610 (9): Bad file descriptor
00:22:23.541 [2024-12-10 12:30:45.663910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2d3d0 (9): Bad file descriptor
00:22:23.541 [2024-12-10 12:30:45.663920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa579b0 (9): Bad file descriptor
00:22:23.541 [2024-12-10 12:30:45.663928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:23.541 [2024-12-10 12:30:45.663935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:23.541 [2024-12-10 12:30:45.663944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:23.541 [2024-12-10 12:30:45.663954] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:23.541 [2024-12-10 12:30:45.663963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:22:23.541 [2024-12-10 12:30:45.663970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:22:23.541 [2024-12-10 12:30:45.663977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:22:23.541 [2024-12-10 12:30:45.663983] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:22:23.541 [2024-12-10 12:30:45.663990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:22:23.541 [2024-12-10 12:30:45.663997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:22:23.541 [2024-12-10 12:30:45.664004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:22:23.541 [2024-12-10 12:30:45.664011] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:22:23.541 [2024-12-10 12:30:45.664018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:22:23.541 [2024-12-10 12:30:45.664025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:22:23.541 [2024-12-10 12:30:45.664031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:22:23.541 [2024-12-10 12:30:45.664037] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:22:23.541 [2024-12-10 12:30:45.664309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:23.541 [2024-12-10 12:30:45.664322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa57bd0 with addr=10.0.0.2, port=4420
00:22:23.541 [2024-12-10 12:30:45.664330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa57bd0 is same with the state(6) to be set
00:22:23.541 [2024-12-10 12:30:45.664339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e3c90 (9): Bad file descriptor
00:22:23.541 [2024-12-10 12:30:45.664350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa1a300 (9): Bad file descriptor
00:22:23.541 [2024-12-10 12:30:45.664359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:22:23.541 [2024-12-10 12:30:45.664365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:22:23.541 [2024-12-10 12:30:45.664372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:22:23.541 [2024-12-10 12:30:45.664379] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:22:23.541 [2024-12-10 12:30:45.664390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:23.541 [2024-12-10 12:30:45.664400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:23.541 [2024-12-10 12:30:45.664407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:23.542 [2024-12-10 12:30:45.664414] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:23.542 [2024-12-10 12:30:45.664421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:23.542 [2024-12-10 12:30:45.664427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:23.542 [2024-12-10 12:30:45.664434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:23.542 [2024-12-10 12:30:45.664440] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:22:23.542 [2024-12-10 12:30:45.664468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa57bd0 (9): Bad file descriptor 00:22:23.542 [2024-12-10 12:30:45.664478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:23.542 [2024-12-10 12:30:45.664484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:23.542 [2024-12-10 12:30:45.664491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:22:23.542 [2024-12-10 12:30:45.664497] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:23.542 [2024-12-10 12:30:45.664504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:23.542 [2024-12-10 12:30:45.664511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:23.542 [2024-12-10 12:30:45.664518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:23.542 [2024-12-10 12:30:45.664524] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:23.542 [2024-12-10 12:30:45.664551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:23.542 [2024-12-10 12:30:45.664558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:23.542 [2024-12-10 12:30:45.664566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:23.542 [2024-12-10 12:30:45.664572] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:22:23.802 12:30:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1690183 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1690183 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1690183 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:25.178 12:30:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:25.178 rmmod nvme_tcp 00:22:25.178 rmmod nvme_fabrics 00:22:25.178 rmmod nvme_keyring 00:22:25.178 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:25.178 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:25.178 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:25.178 
12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1689901 ']' 00:22:25.178 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1689901 00:22:25.178 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1689901 ']' 00:22:25.178 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1689901 00:22:25.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (1689901) - No such process 00:22:25.178 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1689901 is not found' 00:22:25.178 Process with pid 1689901 is not found 00:22:25.178 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:25.178 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:25.178 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:25.178 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:25.178 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:25.178 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:25.178 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:25.178 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:25.178 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:22:25.178 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.178 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:25.178 12:30:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:27.082 00:22:27.082 real 0m8.270s 00:22:27.082 user 0m21.017s 00:22:27.082 sys 0m1.372s 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:27.082 ************************************ 00:22:27.082 END TEST nvmf_shutdown_tc3 00:22:27.082 ************************************ 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:27.082 ************************************ 00:22:27.082 START TEST nvmf_shutdown_tc4 00:22:27.082 ************************************ 00:22:27.082 12:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:27.082 12:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:27.082 12:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:27.082 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:27.082 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:27.082 12:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:27.082 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:22:27.083 Found net devices under 0000:86:00.0: cvl_0_0 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:27.083 Found net devices under 0000:86:00.1: cvl_0_1 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:27.083 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:27.083 12:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:27.342 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:27.342 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:27.342 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:27.342 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:27.342 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:27.342 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:27.342 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:27.342 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:27.342 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:27.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:27.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:22:27.342 00:22:27.342 --- 10.0.0.2 ping statistics --- 00:22:27.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.342 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:22:27.342 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:27.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:27.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:22:27.342 00:22:27.342 --- 10.0.0.1 ping statistics --- 00:22:27.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.342 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:22:27.342 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:27.342 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:27.342 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:27.342 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:27.342 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:27.342 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:27.342 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:27.342 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:27.342 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:27.601 12:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:27.601 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:27.601 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:27.601 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:27.601 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1691416 00:22:27.601 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1691416 00:22:27.601 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:27.601 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1691416 ']' 00:22:27.601 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.601 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:27.601 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:27.601 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:27.601 12:30:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:27.601 [2024-12-10 12:30:49.584126] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:22:27.601 [2024-12-10 12:30:49.584185] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.601 [2024-12-10 12:30:49.664671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:27.601 [2024-12-10 12:30:49.704247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.601 [2024-12-10 12:30:49.704288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.601 [2024-12-10 12:30:49.704295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.601 [2024-12-10 12:30:49.704304] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.601 [2024-12-10 12:30:49.704308] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:27.601 [2024-12-10 12:30:49.705763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.601 [2024-12-10 12:30:49.705875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:27.601 [2024-12-10 12:30:49.705960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.601 [2024-12-10 12:30:49.705961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:28.535 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:28.535 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:22:28.535 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:28.535 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:28.535 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:28.535 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.535 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:28.535 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.535 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:28.535 [2024-12-10 12:30:50.462167] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.535 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.535 12:30:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:28.535 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:28.535 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:28.535 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:28.535 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:22:28.535 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.535 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:28.535 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.535 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:28.535 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.535 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:28.535 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.536 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:28.536 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.536 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # 
cat 00:22:28.536 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.536 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:28.536 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.536 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:28.536 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.536 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:28.536 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.536 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:28.536 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:28.536 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:28.536 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:28.536 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.536 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:28.536 Malloc1 00:22:28.536 [2024-12-10 12:30:50.572096] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.536 Malloc2 00:22:28.536 Malloc3 00:22:28.536 Malloc4 00:22:28.794 Malloc5 00:22:28.794 Malloc6 00:22:28.794 Malloc7 00:22:28.794 Malloc8 00:22:28.794 Malloc9 
00:22:28.794 Malloc10 00:22:29.051 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.051 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:29.051 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:29.051 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:29.051 12:30:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1691732 00:22:29.051 12:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:29.051 12:30:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:29.051 [2024-12-10 12:30:51.079561] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:22:34.320 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:34.321 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1691416 00:22:34.321 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1691416 ']' 00:22:34.321 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1691416 00:22:34.321 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:22:34.321 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:34.321 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1691416 00:22:34.321 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:34.321 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:34.321 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1691416' 00:22:34.321 killing process with pid 1691416 00:22:34.321 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1691416 00:22:34.321 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1691416 00:22:34.321 [2024-12-10 12:30:56.073193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6d200 is same with the state(6) to be set 00:22:34.321 [2024-12-10 
12:30:56.073251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6d200 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.073260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6d200 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.073267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6d200 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.073274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6d200 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.073281] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6d200 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.073821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6d6d0 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.073854] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6d6d0 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.073862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6d6d0 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.073869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6d6d0 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.073875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6d6d0 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.075591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6cd30 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.075617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6cd30 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.075625] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6cd30 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.075633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6cd30 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.075640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6cd30 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.075646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6cd30 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.075653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6cd30 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.075659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6cd30 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.075665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6cd30 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.075671] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6cd30 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.075677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6cd30 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.080605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cde7d0 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.080628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cde7d0 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.080637] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cde7d0 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.080652] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cde7d0 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.080659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cde7d0 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.080666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cde7d0 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.080672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cde7d0 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.080678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cde7d0 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.080683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cde7d0 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.081790] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf170 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.081819] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf170 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.081827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf170 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.081835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf170 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.081841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf170 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.081848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf170 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.081854] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf170 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.081861] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf170 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.081866] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdf170 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.084449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdc620 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.084474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdc620 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.084482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdc620 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.084489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdc620 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.084497] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdc620 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.084503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdc620 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.084512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdc620 is same with the state(6) to be set 00:22:34.321 [2024-12-10 12:30:56.084518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cdc620 is same with the state(6) to be set 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 starting 
I/O failed: -6 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 starting I/O failed: -6 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 starting I/O failed: -6 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 starting I/O failed: -6 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 starting I/O failed: -6 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 starting I/O failed: -6 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 starting I/O failed: -6 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 starting I/O failed: -6 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, 
sc=8) 00:22:34.321 starting I/O failed: -6 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 [2024-12-10 12:30:56.088463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:34.321 starting I/O failed: -6 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 starting I/O failed: -6 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 starting I/O failed: -6 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 starting I/O failed: -6 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 starting I/O failed: -6 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 starting I/O failed: -6 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.321 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 
Write completed with error (sct=0, sc=8) 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 [2024-12-10 12:30:56.089395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:34.322 [2024-12-10 12:30:56.089469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a68b10 is same with the state(6) to be set 00:22:34.322 [2024-12-10 12:30:56.089490] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a68b10 is same with the state(6) to be set 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 [2024-12-10 12:30:56.089502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a68b10 is same with the state(6) to be set 00:22:34.322 [2024-12-10 12:30:56.089510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a68b10 is same with the state(6) to be set 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 [2024-12-10 12:30:56.089516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a68b10 is same with the state(6) to be set 00:22:34.322 starting I/O failed: -6 00:22:34.322 [2024-12-10 12:30:56.089523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a68b10 is same with the state(6) to be set 00:22:34.322 [2024-12-10 12:30:56.089530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a68b10 is same with the state(6) to be set 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 [2024-12-10 12:30:56.089536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a68b10 is same with the state(6) to be set 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 
00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 [2024-12-10 12:30:56.089839] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a69000 is same with the state(6) to be set 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 starting I/O failed: -6 00:22:34.322 [2024-12-10 12:30:56.089860] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a69000 is same with the state(6) to be set 00:22:34.322 [2024-12-10 12:30:56.089868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a69000 is same with the state(6) to be set 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 [2024-12-10 12:30:56.089875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a69000 is same with the state(6) to be set 00:22:34.322 [2024-12-10 12:30:56.089883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a69000 is same with the state(6) to be set 00:22:34.322 Write completed with error (sct=0, sc=8) 00:22:34.322 [2024-12-10 12:30:56.089890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a69000 is same with the state(6) to be set 00:22:34.322 starting I/O failed: -6 00:22:34.322 [2024-12-10 12:30:56.089897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1a69000 is same with the state(6) to be set
00:22:34.322 [2024-12-10 12:30:56.089903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a69000 is same with the state(6) to be set [message repeated through 12:30:56.089929]
00:22:34.322 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [repeated]
00:22:34.322 [2024-12-10 12:30:56.090445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:34.322 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [repeated]
00:22:34.323 [2024-12-10 12:30:56.092224] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6aae0 is same with the state(6) to be set [message repeated through 12:30:56.092277]
00:22:34.323 [2024-12-10 12:30:56.092304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.323 NVMe io qpair process completion error
00:22:34.323 [2024-12-10 12:30:56.092656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6afd0 is same with the state(6) to be set [message repeated through 12:30:56.092713]
00:22:34.323 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [repeated]
00:22:34.323 [2024-12-10 12:30:56.092994] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6b4c0 is same with the state(6) to be set [message repeated through 12:30:56.093035]
00:22:34.323 [2024-12-10 12:30:56.093281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:34.323 [2024-12-10 12:30:56.093322] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6a610 is same with the state(6) to be set [message repeated through 12:30:56.093381]
00:22:34.323 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [repeated]
00:22:34.324 [2024-12-10 12:30:56.094147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:34.324 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [repeated]
00:22:34.324 [2024-12-10 12:30:56.095337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:34.324 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [repeated]
00:22:34.325 [2024-12-10 12:30:56.096902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.325 NVMe io qpair process completion error
00:22:34.325 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [repeated]
00:22:34.325 [2024-12-10 12:30:56.097919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:34.325 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [repeated]
00:22:34.325 [2024-12-10 12:30:56.098819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:34.325 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [repeated]
00:22:34.326 [2024-12-10 12:30:56.099861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:34.326 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [repeated]
starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 
00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, 
sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 [2024-12-10 12:30:56.101994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:34.326 NVMe io qpair process completion error 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with 
error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 [2024-12-10 12:30:56.102988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write 
completed with error (sct=0, sc=8) 00:22:34.326 starting I/O failed: -6 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.326 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error 
(sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 [2024-12-10 12:30:56.103899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting 
I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write 
completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 [2024-12-10 12:30:56.104911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 
00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, 
sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.327 Write completed with error (sct=0, sc=8) 00:22:34.327 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error 
(sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 [2024-12-10 12:30:56.108614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:34.328 NVMe io qpair process completion error 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error 
(sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 [2024-12-10 12:30:56.109622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed 
with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 Write completed with error (sct=0, sc=8) 00:22:34.328 starting I/O failed: -6 00:22:34.328 Write completed with error (sct=0, 
sc=8)
00:22:34.328 Write completed with error (sct=0, sc=8)
00:22:34.328 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for each outstanding I/O ...]
00:22:34.328 [2024-12-10 12:30:56.110500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error lines elided ...]
00:22:34.329 [2024-12-10 12:30:56.111532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error lines elided ...]
00:22:34.329 [2024-12-10 12:30:56.115403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.329 NVMe io qpair process completion error
[... repeated write-error lines elided ...]
00:22:34.329 [2024-12-10 12:30:56.116464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error lines elided ...]
00:22:34.330 [2024-12-10 12:30:56.117390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error lines elided ...]
00:22:34.330 [2024-12-10 12:30:56.118390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error lines elided ...]
00:22:34.331 [2024-12-10 12:30:56.120150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:34.331 NVMe io qpair process completion error
[... repeated write-error lines elided ...]
00:22:34.332 Write
completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 [2024-12-10 12:30:56.125185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: 
*ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error 
(sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.332 Write completed with error (sct=0, sc=8) 00:22:34.332 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 [2024-12-10 12:30:56.126086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:34.333 starting I/O failed: -6 00:22:34.333 starting I/O failed: -6 00:22:34.333 starting I/O failed: -6 00:22:34.333 starting I/O failed: -6 00:22:34.333 starting I/O failed: -6 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write 
completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 
00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 [2024-12-10 12:30:56.127274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, 
sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error 
(sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with 
error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 [2024-12-10 12:30:56.132108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:34.333 NVMe io qpair process completion error 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 Write completed with error (sct=0, sc=8) 00:22:34.333 starting I/O failed: -6 00:22:34.333 Write completed with error (sct=0, sc=8) 
00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 [2024-12-10 12:30:56.133171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O 
failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, 
sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 [2024-12-10 12:30:56.134096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed 
with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 
starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 [2024-12-10 12:30:56.135093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 00:22:34.334 starting I/O failed: -6 00:22:34.334 Write completed with error (sct=0, sc=8) 
00:22:34.334 starting I/O failed: -6
00:22:34.334 Write completed with error (sct=0, sc=8)
00:22:34.334 [previous two messages repeated for each outstanding I/O]
00:22:34.335 [2024-12-10 12:30:56.139150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.335 NVMe io qpair process completion error
00:22:34.335 Write completed with error (sct=0, sc=8)
00:22:34.335 starting I/O failed: -6
00:22:34.335 [previous two messages repeated for each outstanding I/O]
00:22:34.335 [2024-12-10 12:30:56.140102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:34.335 Write completed with error (sct=0, sc=8)
00:22:34.335 starting I/O failed: -6
00:22:34.335 [previous two messages repeated for each outstanding I/O]
00:22:34.335 Write completed with error (sct=0, sc=8)
00:22:34.335 starting I/O failed: -6
00:22:34.335 [previous two messages repeated for each outstanding I/O]
00:22:34.336 [2024-12-10 12:30:56.141015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:34.336 Write completed with error (sct=0, sc=8)
00:22:34.336 starting I/O failed: -6
00:22:34.336 [previous two messages repeated for each outstanding I/O]
00:22:34.336 [2024-12-10 12:30:56.142090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:34.336 Write completed with error (sct=0, sc=8)
00:22:34.336 starting I/O failed: -6
00:22:34.336 Write completed with error (sct=0, sc=8)
00:22:34.336 starting I/O failed: -6
00:22:34.336 [previous two messages repeated for each outstanding I/O]
00:22:34.336 [2024-12-10 12:30:56.144482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:34.336 NVMe io qpair process completion error
00:22:34.336 Initializing NVMe Controllers
00:22:34.336 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:34.336 Controller IO queue size 128, less than required.
00:22:34.336 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:34.336 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:22:34.336 Controller IO queue size 128, less than required.
00:22:34.336 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:34.336 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:22:34.337 Controller IO queue size 128, less than required.
00:22:34.337 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:34.337 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:22:34.337 Controller IO queue size 128, less than required.
00:22:34.337 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:34.337 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:34.337 Controller IO queue size 128, less than required.
00:22:34.337 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:34.337 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:22:34.337 Controller IO queue size 128, less than required.
00:22:34.337 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:34.337 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:34.337 Controller IO queue size 128, less than required.
00:22:34.337 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:34.337 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:22:34.337 Controller IO queue size 128, less than required.
00:22:34.337 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:34.337 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:22:34.337 Controller IO queue size 128, less than required.
00:22:34.337 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:34.337 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:22:34.337 Controller IO queue size 128, less than required.
00:22:34.337 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:34.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:34.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:34.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:34.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:34.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:34.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:34.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:34.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:34.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:34.337 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:34.337 Initialization complete. Launching workers.
00:22:34.337 ========================================================
00:22:34.337 Latency(us)
00:22:34.337 Device Information : IOPS MiB/s Average min max
00:22:34.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2162.87 92.94 59187.04 835.75 112438.63
00:22:34.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2158.16 92.73 59369.68 841.82 118210.13
00:22:34.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2130.33 91.54 60183.74 676.30 123052.20
00:22:34.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2191.12 94.15 57822.62 713.33 105523.83
00:22:34.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2171.00 93.29 58373.10 854.86 105443.28
00:22:34.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2150.66 92.41 58935.83 852.21 103465.66
00:22:34.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2135.46 91.76 59373.12 924.69 103289.27
00:22:34.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2132.47 91.63 59494.89 872.28 104104.72
00:22:34.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2140.39 91.97 59311.85 736.81 108594.41
00:22:34.337 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2127.76 91.43 59667.35 1179.52 111821.52
00:22:34.337 ========================================================
00:22:34.337 Total : 21500.21 923.84 59166.74 676.30 123052.20
00:22:34.337
00:22:34.337 [2024-12-10 12:30:56.147465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61bc0 is same with the state(6) to be set
00:22:34.337 [2024-12-10 12:30:56.147511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a62740 is same with the state(6) to be set
00:22:34.337 [2024-12-10 12:30:56.147545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a62410 is same with the state(6) to be set
00:22:34.337 [2024-12-10 12:30:56.147574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a63900 is same with the state(6) to be set
00:22:34.337 [2024-12-10 12:30:56.147610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a63720 is same with the state(6) to be set
00:22:34.337 [2024-12-10 12:30:56.147638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61890 is same with the state(6) to be set
00:22:34.337 [2024-12-10 12:30:56.147665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61ef0 is same with the state(6) to be set
00:22:34.337 [2024-12-10 12:30:56.147692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a61560 is same with the state(6) to be set
00:22:34.337 [2024-12-10 12:30:56.147721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a62a70 is same with the state(6) to be set
00:22:34.337 [2024-12-10 12:30:56.147748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a63ae0 is same with the state(6) to be set
00:22:34.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf: errors occurred
00:22:34.337 12:30:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:22:35.714 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1691732
00:22:35.714 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:22:35.714 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1691732
00:22:35.715 12:30:57
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1691732 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:35.715 
12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:35.715 rmmod nvme_tcp 00:22:35.715 rmmod nvme_fabrics 00:22:35.715 rmmod nvme_keyring 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1691416 ']' 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1691416 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1691416 ']' 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1691416 00:22:35.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (1691416) - No such process 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1691416 
is not found' 00:22:35.715 Process with pid 1691416 is not found 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.715 12:30:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.618 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:37.618 00:22:37.618 real 0m10.413s 00:22:37.618 user 0m27.581s 00:22:37.618 sys 0m5.160s 00:22:37.618 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:22:37.618 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:37.618 ************************************ 00:22:37.618 END TEST nvmf_shutdown_tc4 00:22:37.618 ************************************ 00:22:37.618 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:22:37.618 00:22:37.618 real 0m42.002s 00:22:37.618 user 1m45.463s 00:22:37.618 sys 0m13.927s 00:22:37.618 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.618 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:37.618 ************************************ 00:22:37.618 END TEST nvmf_shutdown 00:22:37.618 ************************************ 00:22:37.618 12:30:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:37.618 12:30:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:37.618 12:30:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:37.618 12:30:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:37.618 ************************************ 00:22:37.618 START TEST nvmf_nsid 00:22:37.618 ************************************ 00:22:37.618 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:37.878 * Looking for test storage... 
00:22:37.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 
00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:37.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.878 --rc genhtml_branch_coverage=1 00:22:37.878 --rc genhtml_function_coverage=1 00:22:37.878 --rc genhtml_legend=1 00:22:37.878 --rc geninfo_all_blocks=1 00:22:37.878 
--rc geninfo_unexecuted_blocks=1 00:22:37.878 00:22:37.878 ' 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:37.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.878 --rc genhtml_branch_coverage=1 00:22:37.878 --rc genhtml_function_coverage=1 00:22:37.878 --rc genhtml_legend=1 00:22:37.878 --rc geninfo_all_blocks=1 00:22:37.878 --rc geninfo_unexecuted_blocks=1 00:22:37.878 00:22:37.878 ' 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:37.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.878 --rc genhtml_branch_coverage=1 00:22:37.878 --rc genhtml_function_coverage=1 00:22:37.878 --rc genhtml_legend=1 00:22:37.878 --rc geninfo_all_blocks=1 00:22:37.878 --rc geninfo_unexecuted_blocks=1 00:22:37.878 00:22:37.878 ' 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:37.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:37.878 --rc genhtml_branch_coverage=1 00:22:37.878 --rc genhtml_function_coverage=1 00:22:37.878 --rc genhtml_legend=1 00:22:37.878 --rc geninfo_all_blocks=1 00:22:37.878 --rc geninfo_unexecuted_blocks=1 00:22:37.878 00:22:37.878 ' 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.878 12:30:59 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.878 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:37.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:37.879 12:30:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:44.444 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:44.444 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:44.444 Found net devices under 0000:86:00.0: cvl_0_0 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:44.444 Found net devices under 0000:86:00.1: cvl_0_1 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:44.444 12:31:05 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:44.444 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:44.444 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:22:44.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:22:44.444 00:22:44.444 --- 10.0.0.2 ping statistics --- 00:22:44.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.445 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:22:44.445 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:44.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:22:44.445 00:22:44.445 --- 10.0.0.1 ping statistics --- 00:22:44.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.445 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:22:44.445 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.445 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:44.445 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:44.445 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.445 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:44.445 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:44.445 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.445 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:44.445 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:44.445 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:44.445 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:44.445 12:31:05 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:44.445 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:44.445 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1696191 00:22:44.445 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1696191 00:22:44.445 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:44.445 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1696191 ']' 00:22:44.445 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.445 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:44.445 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.445 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:44.445 12:31:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:44.445 [2024-12-10 12:31:05.909154] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:22:44.445 [2024-12-10 12:31:05.909201] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.445 [2024-12-10 12:31:05.989863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.445 [2024-12-10 12:31:06.031450] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.445 [2024-12-10 12:31:06.031481] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.445 [2024-12-10 12:31:06.031488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.445 [2024-12-10 12:31:06.031494] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.445 [2024-12-10 12:31:06.031500] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:44.445 [2024-12-10 12:31:06.032033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1696213 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:44.445 
12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=14b9b4b9-bed1-430e-9768-403bcd39c1d2 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=fa098ab5-843a-4adc-8abe-4ba68d2539ed 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=e2121282-676e-4670-8bed-c618fa76de33 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:44.445 null0 00:22:44.445 null1 00:22:44.445 [2024-12-10 12:31:06.210172] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:22:44.445 [2024-12-10 12:31:06.210217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1696213 ] 00:22:44.445 null2 00:22:44.445 [2024-12-10 12:31:06.215371] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:44.445 [2024-12-10 12:31:06.239566] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1696213 /var/tmp/tgt2.sock 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1696213 ']' 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:44.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:44.445 [2024-12-10 12:31:06.284482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.445 [2024-12-10 12:31:06.329347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:44.445 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:45.011 [2024-12-10 12:31:06.872168] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.011 [2024-12-10 12:31:06.888268] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:45.011 nvme0n1 nvme0n2 00:22:45.011 nvme1n1 00:22:45.011 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:45.011 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:45.011 12:31:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:45.944 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:45.944 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:45.944 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:22:45.944 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:45.944 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:45.944 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:45.944 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:45.944 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:45.944 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:45.944 12:31:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:45.944 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:45.944 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:45.944 12:31:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:46.877 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:46.877 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:46.877 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:46.877 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:46.877 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:46.877 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 14b9b4b9-bed1-430e-9768-403bcd39c1d2 00:22:46.877 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:46.877 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:46.877 12:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:46.877 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:46.877 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=14b9b4b9bed1430e9768403bcd39c1d2 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 14B9B4B9BED1430E9768403BCD39C1D2 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 14B9B4B9BED1430E9768403BCD39C1D2 == \1\4\B\9\B\4\B\9\B\E\D\1\4\3\0\E\9\7\6\8\4\0\3\B\C\D\3\9\C\1\D\2 ]] 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid fa098ab5-843a-4adc-8abe-4ba68d2539ed 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:47.135 
12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=fa098ab5843a4adc8abe4ba68d2539ed 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo FA098AB5843A4ADC8ABE4BA68D2539ED 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ FA098AB5843A4ADC8ABE4BA68D2539ED == \F\A\0\9\8\A\B\5\8\4\3\A\4\A\D\C\8\A\B\E\4\B\A\6\8\D\2\5\3\9\E\D ]] 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid e2121282-676e-4670-8bed-c618fa76de33 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e2121282676e46708bedc618fa76de33 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E2121282676E46708BEDC618FA76DE33 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ E2121282676E46708BEDC618FA76DE33 == \E\2\1\2\1\2\8\2\6\7\6\E\4\6\7\0\8\B\E\D\C\6\1\8\F\A\7\6\D\E\3\3 ]] 00:22:47.135 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:47.393 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:47.393 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:47.393 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1696213 00:22:47.393 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1696213 ']' 00:22:47.393 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1696213 00:22:47.393 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:47.393 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.393 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1696213 00:22:47.393 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:47.393 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:47.393 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1696213' 00:22:47.393 killing process with pid 1696213 00:22:47.393 12:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1696213 00:22:47.393 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1696213 00:22:47.651 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:47.651 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:47.651 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:47.651 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:47.651 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:22:47.651 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:47.651 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:47.651 rmmod nvme_tcp 00:22:47.651 rmmod nvme_fabrics 00:22:47.651 rmmod nvme_keyring 00:22:47.909 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:47.909 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:47.909 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:47.909 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1696191 ']' 00:22:47.909 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1696191 00:22:47.909 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1696191 ']' 00:22:47.909 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1696191 00:22:47.909 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:47.909 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.909 12:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1696191 00:22:47.909 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:47.909 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:47.909 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1696191' 00:22:47.909 killing process with pid 1696191 00:22:47.909 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1696191 00:22:47.909 12:31:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1696191 00:22:47.909 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:47.909 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:47.909 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:47.909 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:47.909 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:47.909 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:47.909 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:47.909 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.909 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:47.909 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.909 12:31:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.909 12:31:10 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.440 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:50.440 00:22:50.440 real 0m12.394s 00:22:50.440 user 0m9.722s 00:22:50.440 sys 0m5.489s 00:22:50.440 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:50.440 12:31:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:50.440 ************************************ 00:22:50.440 END TEST nvmf_nsid 00:22:50.440 ************************************ 00:22:50.440 12:31:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:50.440 00:22:50.440 real 11m59.920s 00:22:50.441 user 25m38.884s 00:22:50.441 sys 3m42.886s 00:22:50.441 12:31:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:50.441 12:31:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:50.441 ************************************ 00:22:50.441 END TEST nvmf_target_extra 00:22:50.441 ************************************ 00:22:50.441 12:31:12 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:50.441 12:31:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:50.441 12:31:12 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:50.441 12:31:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:50.441 ************************************ 00:22:50.441 START TEST nvmf_host 00:22:50.441 ************************************ 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:50.441 * Looking for test storage... 
00:22:50.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:50.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.441 --rc genhtml_branch_coverage=1 00:22:50.441 --rc genhtml_function_coverage=1 00:22:50.441 --rc genhtml_legend=1 00:22:50.441 --rc geninfo_all_blocks=1 00:22:50.441 --rc geninfo_unexecuted_blocks=1 00:22:50.441 00:22:50.441 ' 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:50.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.441 --rc genhtml_branch_coverage=1 00:22:50.441 --rc genhtml_function_coverage=1 00:22:50.441 --rc genhtml_legend=1 00:22:50.441 --rc 
geninfo_all_blocks=1 00:22:50.441 --rc geninfo_unexecuted_blocks=1 00:22:50.441 00:22:50.441 ' 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:50.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.441 --rc genhtml_branch_coverage=1 00:22:50.441 --rc genhtml_function_coverage=1 00:22:50.441 --rc genhtml_legend=1 00:22:50.441 --rc geninfo_all_blocks=1 00:22:50.441 --rc geninfo_unexecuted_blocks=1 00:22:50.441 00:22:50.441 ' 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:50.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.441 --rc genhtml_branch_coverage=1 00:22:50.441 --rc genhtml_function_coverage=1 00:22:50.441 --rc genhtml_legend=1 00:22:50.441 --rc geninfo_all_blocks=1 00:22:50.441 --rc geninfo_unexecuted_blocks=1 00:22:50.441 00:22:50.441 ' 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:50.441 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:50.441 12:31:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.441 ************************************ 00:22:50.441 START TEST nvmf_multicontroller 00:22:50.441 ************************************ 00:22:50.442 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:50.442 * Looking for test storage... 
00:22:50.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:22:50.442 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:50.442 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:22:50.442 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:50.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.701 --rc genhtml_branch_coverage=1 00:22:50.701 --rc 
genhtml_function_coverage=1 00:22:50.701 --rc genhtml_legend=1 00:22:50.701 --rc geninfo_all_blocks=1 00:22:50.701 --rc geninfo_unexecuted_blocks=1 00:22:50.701 00:22:50.701 ' 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:50.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.701 --rc genhtml_branch_coverage=1 00:22:50.701 --rc genhtml_function_coverage=1 00:22:50.701 --rc genhtml_legend=1 00:22:50.701 --rc geninfo_all_blocks=1 00:22:50.701 --rc geninfo_unexecuted_blocks=1 00:22:50.701 00:22:50.701 ' 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:50.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.701 --rc genhtml_branch_coverage=1 00:22:50.701 --rc genhtml_function_coverage=1 00:22:50.701 --rc genhtml_legend=1 00:22:50.701 --rc geninfo_all_blocks=1 00:22:50.701 --rc geninfo_unexecuted_blocks=1 00:22:50.701 00:22:50.701 ' 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:50.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.701 --rc genhtml_branch_coverage=1 00:22:50.701 --rc genhtml_function_coverage=1 00:22:50.701 --rc genhtml_legend=1 00:22:50.701 --rc geninfo_all_blocks=1 00:22:50.701 --rc geninfo_unexecuted_blocks=1 00:22:50.701 00:22:50.701 ' 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:50.701 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # 
[[ -e /bin/wpdk_common.sh ]] 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.702 12:31:12 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:50.702 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:50.702 12:31:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:57.269 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:57.269 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.269 12:31:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:57.269 Found net devices under 0000:86:00.0: cvl_0_0 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:57.269 Found net devices under 0000:86:00.1: cvl_0_1 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.269 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:57.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:22:57.270 00:22:57.270 --- 10.0.0.2 ping statistics --- 00:22:57.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.270 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:57.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:22:57.270 00:22:57.270 --- 10.0.0.1 ping statistics --- 00:22:57.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.270 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1700521 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1700521 00:22:57.270 12:31:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1700521 ']' 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.270 [2024-12-10 12:31:18.660080] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:22:57.270 [2024-12-10 12:31:18.660130] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.270 [2024-12-10 12:31:18.740625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:57.270 [2024-12-10 12:31:18.782267] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.270 [2024-12-10 12:31:18.782303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:57.270 [2024-12-10 12:31:18.782310] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:57.270 [2024-12-10 12:31:18.782316] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:57.270 [2024-12-10 12:31:18.782321] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:57.270 [2024-12-10 12:31:18.783696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:22:57.270 [2024-12-10 12:31:18.783806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:57.270 [2024-12-10 12:31:18.783807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:57.270 [2024-12-10 12:31:18.921393] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:57.270 Malloc0
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:57.270 [2024-12-10 12:31:18.990884] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.270 12:31:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:57.270 [2024-12-10 12:31:18.998833] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:22:57.270 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.270 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:22:57.270 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.270 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:57.270 Malloc1
00:22:57.270 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.270 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:22:57.270 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.270 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:57.270 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.270 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1700555
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1700555 /var/tmp/bdevperf.sock
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1700555 ']'
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:57.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.271 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:57.530 NVMe0n1
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.530 1
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:57.530 request:
00:22:57.530 {
00:22:57.530 "name": "NVMe0",
00:22:57.530 "trtype": "tcp",
00:22:57.530 "traddr": "10.0.0.2",
00:22:57.530 "adrfam": "ipv4",
00:22:57.530 "trsvcid": "4420",
00:22:57.530 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:57.530 "hostnqn": "nqn.2021-09-7.io.spdk:00001",
00:22:57.530 "hostaddr": "10.0.0.1",
00:22:57.530 "prchk_reftag": false,
00:22:57.530 "prchk_guard": false,
00:22:57.530 "hdgst": false,
00:22:57.530 "ddgst": false,
00:22:57.530 "allow_unrecognized_csi": false,
00:22:57.530 "method": "bdev_nvme_attach_controller",
00:22:57.530 "req_id": 1
00:22:57.530 }
00:22:57.530 Got JSON-RPC error response
00:22:57.530 response:
00:22:57.530 {
00:22:57.530 "code": -114,
00:22:57.530 "message": "A controller named NVMe0 already exists with the specified network path"
00:22:57.530 }
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:57.530 request:
00:22:57.530 {
00:22:57.530 "name": "NVMe0",
00:22:57.530 "trtype": "tcp",
00:22:57.530 "traddr": "10.0.0.2",
00:22:57.530 "adrfam": "ipv4",
00:22:57.530 "trsvcid": "4420",
00:22:57.530 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:22:57.530 "hostaddr": "10.0.0.1",
00:22:57.530 "prchk_reftag": false,
00:22:57.530 "prchk_guard": false,
00:22:57.530 "hdgst": false,
00:22:57.530 "ddgst": false,
00:22:57.530 "allow_unrecognized_csi": false,
00:22:57.530 "method": "bdev_nvme_attach_controller",
00:22:57.530 "req_id": 1
00:22:57.530 }
00:22:57.530 Got JSON-RPC error response
00:22:57.530 response:
00:22:57.530 {
00:22:57.530 "code": -114,
00:22:57.530 "message": "A controller named NVMe0 already exists with the specified network path"
00:22:57.530 }
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.530 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:57.530 request:
00:22:57.530 {
00:22:57.530 "name": "NVMe0",
00:22:57.530 "trtype": "tcp",
00:22:57.530 "traddr": "10.0.0.2",
00:22:57.530 "adrfam": "ipv4",
00:22:57.531 "trsvcid": "4420",
00:22:57.531 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:57.531 "hostaddr": "10.0.0.1",
00:22:57.531 "prchk_reftag": false,
00:22:57.531 "prchk_guard": false,
00:22:57.531 "hdgst": false,
00:22:57.531 "ddgst": false,
00:22:57.531 "multipath": "disable",
00:22:57.531 "allow_unrecognized_csi": false,
00:22:57.531 "method": "bdev_nvme_attach_controller",
00:22:57.531 "req_id": 1
00:22:57.531 }
00:22:57.531 Got JSON-RPC error response
00:22:57.531 response:
00:22:57.531 {
00:22:57.531 "code": -114,
00:22:57.531 "message": "A controller named NVMe0 already exists and multipath is disabled"
00:22:57.531 }
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:57.531 request:
00:22:57.531 {
00:22:57.531 "name": "NVMe0",
00:22:57.531 "trtype": "tcp",
00:22:57.531 "traddr": "10.0.0.2",
00:22:57.531 "adrfam": "ipv4",
00:22:57.531 "trsvcid": "4420",
00:22:57.531 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:57.531 "hostaddr": "10.0.0.1",
00:22:57.531 "prchk_reftag": false,
00:22:57.531 "prchk_guard": false,
00:22:57.531 "hdgst": false,
00:22:57.531 "ddgst": false,
00:22:57.531 "multipath": "failover",
00:22:57.531 "allow_unrecognized_csi": false,
00:22:57.531 "method": "bdev_nvme_attach_controller",
00:22:57.531 "req_id": 1
00:22:57.531 }
00:22:57.531 Got JSON-RPC error response
00:22:57.531 response:
00:22:57.531 {
00:22:57.531 "code": -114,
00:22:57.531 "message": "A controller named NVMe0 already exists with the specified network path"
00:22:57.531 }
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:57.531 NVMe0n1
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.531 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:57.789
00:22:57.789 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.789 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:57.789 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe
00:22:57.789 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:57.789 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:57.789 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:57.789 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']'
00:22:57.789 12:31:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:59.165 {
00:22:59.165 "results": [
00:22:59.165 {
00:22:59.165 "job": "NVMe0n1",
00:22:59.165 "core_mask": "0x1",
00:22:59.165 "workload": "write",
00:22:59.165 "status": "finished",
00:22:59.165 "queue_depth": 128,
00:22:59.165 "io_size": 4096,
00:22:59.165 "runtime": 1.006613,
00:22:59.165 "iops": 22923.40750616175,
00:22:59.165 "mibps": 89.54456057094434,
00:22:59.165 "io_failed": 0,
00:22:59.165 "io_timeout": 0,
00:22:59.165 "avg_latency_us": 5565.67127186396,
00:22:59.165 "min_latency_us": 4217.099130434783,
00:22:59.165 "max_latency_us": 14360.932173913043
00:22:59.165 }
00:22:59.165 ],
00:22:59.165 "core_count": 1
00:22:59.165 }
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]]
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1700555
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1700555 ']'
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1700555
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1700555
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1700555'
00:22:59.165 killing process with pid 1700555
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1700555
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1700555
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt -type f
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat
00:22:59.165 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt ---
00:22:59.165 [2024-12-10 12:31:19.103827] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization...
00:22:59.165 [2024-12-10 12:31:19.103874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1700555 ]
00:22:59.165 [2024-12-10 12:31:19.176976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:59.165 [2024-12-10 12:31:19.218920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:59.165 [2024-12-10 12:31:19.873830] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name 6366e501-cc55-4ae1-a4d0-528484023326 already exists
00:22:59.165 [2024-12-10 12:31:19.873859] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:6366e501-cc55-4ae1-a4d0-528484023326 alias for bdev NVMe1n1
00:22:59.165 [2024-12-10 12:31:19.873867] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:22:59.165 Running I/O for 1 seconds...
00:22:59.165 22915.00 IOPS, 89.51 MiB/s
00:22:59.165
00:22:59.165 Latency(us)
00:22:59.165 [2024-12-10T11:31:21.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:59.165 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:22:59.165 NVMe0n1 : 1.01 22923.41 89.54 0.00 0.00 5565.67 4217.10 14360.93
00:22:59.165 [2024-12-10T11:31:21.333Z] ===================================================================================================================
00:22:59.165 [2024-12-10T11:31:21.333Z] Total : 22923.41 89.54 0.00 0.00 5565.67 4217.10 14360.93
00:22:59.165 Received shutdown signal, test time was about 1.000000 seconds
00:22:59.165
00:22:59.165 Latency(us)
00:22:59.165 [2024-12-10T11:31:21.333Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:59.165 [2024-12-10T11:31:21.333Z] ===================================================================================================================
00:22:59.165 [2024-12-10T11:31:21.333Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:59.165 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt ---
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:59.165 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:59.165 rmmod nvme_tcp
00:22:59.165 rmmod nvme_fabrics
00:22:59.165 rmmod nvme_keyring
00:22:59.424 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:59.424 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e
00:22:59.424 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0
00:22:59.424 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1700521 ']'
00:22:59.424 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1700521
00:22:59.424 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1700521 ']'
00:22:59.424 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1700521
00:22:59.424 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname
00:22:59.424 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:59.424 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1700521
00:22:59.424 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:59.424 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:59.424 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1700521'
00:22:59.424 killing process with pid 1700521
00:22:59.424 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1700521
00:22:59.424 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1700521
00:22:59.682 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:59.683 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:59.683 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:59.683 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr
00:22:59.683 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save
00:22:59.683 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:59.683 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore
00:22:59.683 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:59.683 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:59.683 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:59.683 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:59.683 12:31:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:01.586 12:31:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:01.586
00:23:01.586 real 0m11.175s
00:23:01.586 user 0m12.317s
00:23:01.586 sys 0m5.279s
00:23:01.586 12:31:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:01.586 12:31:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:01.586 ************************************
00:23:01.586 END TEST nvmf_multicontroller
00:23:01.586 ************************************
00:23:01.586 12:31:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/aer.sh --transport=tcp
00:23:01.586 12:31:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:23:01.586 12:31:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:01.586 12:31:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:23:01.586 ************************************
00:23:01.586 START TEST nvmf_aer
00:23:01.586 ************************************
00:23:01.586 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/aer.sh --transport=tcp
00:23:01.845 * Looking for test storage...
00:23:01.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host
00:23:01.845 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:23:01.845 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version
00:23:01.845 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:23:01.845 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:23:01.845 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:01.845 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l
00:23:01.845 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l
00:23:01.845 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-:
00:23:01.845 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1
00:23:01.845 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-:
00:23:01.845 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2
00:23:01.845 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<'
00:23:01.845 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2
00:23:01.845 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1
00:23:01.845 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:23:01.845 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in
00:23:01.845 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1
00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 ))
00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1
00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1
00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1
00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1
00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2
00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2
00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2
00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2
00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0
00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:23:01.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:01.846 --rc genhtml_branch_coverage=1
00:23:01.846 --rc genhtml_function_coverage=1
00:23:01.846 --rc genhtml_legend=1
00:23:01.846 --rc geninfo_all_blocks=1
00:23:01.846 --rc geninfo_unexecuted_blocks=1
00:23:01.846
00:23:01.846 '
00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:23:01.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:01.846 --rc
genhtml_branch_coverage=1 00:23:01.846 --rc genhtml_function_coverage=1 00:23:01.846 --rc genhtml_legend=1 00:23:01.846 --rc geninfo_all_blocks=1 00:23:01.846 --rc geninfo_unexecuted_blocks=1 00:23:01.846 00:23:01.846 ' 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:01.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.846 --rc genhtml_branch_coverage=1 00:23:01.846 --rc genhtml_function_coverage=1 00:23:01.846 --rc genhtml_legend=1 00:23:01.846 --rc geninfo_all_blocks=1 00:23:01.846 --rc geninfo_unexecuted_blocks=1 00:23:01.846 00:23:01.846 ' 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:01.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.846 --rc genhtml_branch_coverage=1 00:23:01.846 --rc genhtml_function_coverage=1 00:23:01.846 --rc genhtml_legend=1 00:23:01.846 --rc geninfo_all_blocks=1 00:23:01.846 --rc geninfo_unexecuted_blocks=1 00:23:01.846 00:23:01.846 ' 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.846 12:31:23 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
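The `lt 1.15 2` call traced a little earlier (scripts/common.sh `cmp_versions`) splits each version string on dots and compares it field by field, which is why the run takes the "lcov older than 2" branch. A minimal standalone sketch of that behavior follows; it is a paraphrase of the logic visible in the xtrace, not the verbatim SPDK scripts/common.sh source, and the function name `lt` simply mirrors the helper name in the trace:

```shell
# Sketch (assumption: paraphrase of the cmp_versions/lt behavior in the log).
# Returns 0 (true) iff version $1 is strictly less than version $2,
# comparing dot-separated fields numerically; missing fields count as 0.
lt() {
  local IFS=. i x y
  local -a a b
  read -ra a <<< "$1"   # e.g. "1.15" -> a=(1 15)
  read -ra b <<< "$2"   # e.g. "2"    -> b=(2)
  local max=${#a[@]}
  if (( ${#b[@]} > max )); then max=${#b[@]}; fi
  for (( i = 0; i < max; i++ )); do
    x=${a[i]:-0}
    y=${b[i]:-0}
    if (( x < y )); then return 0; fi   # first differing field decides
    if (( x > y )); then return 1; fi
  done
  return 1   # equal versions are not "less than"
}
```

With this, `lt 1.15 2` succeeds (field 1 < 2), matching the branch the log takes before it sets `lcov_rc_opt`.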
00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:01.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:01.846 12:31:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:08.412 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:08.412 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:08.412 12:31:29 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:08.412 Found net devices under 0000:86:00.0: cvl_0_0 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:08.412 Found net devices under 0000:86:00.1: cvl_0_1 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:08.412 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:08.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:08.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms
00:23:08.413
00:23:08.413 --- 10.0.0.2 ping statistics ---
00:23:08.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:08.413 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms
00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:08.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:08.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms
00:23:08.413
00:23:08.413 --- 10.0.0.1 ping statistics ---
00:23:08.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:08.413 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms
00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0
00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer --
common/autotest_common.sh@10 -- # set +x 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1704482 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1704482 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1704482 ']' 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:08.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.413 12:31:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.413 [2024-12-10 12:31:29.943333] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:23:08.413 [2024-12-10 12:31:29.943388] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.413 [2024-12-10 12:31:30.025913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:08.413 [2024-12-10 12:31:30.072426] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:08.413 [2024-12-10 12:31:30.072461] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.413 [2024-12-10 12:31:30.072468] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.413 [2024-12-10 12:31:30.072474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.413 [2024-12-10 12:31:30.072479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:08.413 [2024-12-10 12:31:30.073927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.413 [2024-12-10 12:31:30.073946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.413 [2024-12-10 12:31:30.074066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:08.413 [2024-12-10 12:31:30.074067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:08.671 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.671 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:23:08.671 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:08.671 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:08.671 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.671 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.671 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:08.671 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.671 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.930 [2024-12-10 12:31:30.842226] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.930 Malloc0 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.930 [2024-12-10 12:31:30.907665] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.930 [ 00:23:08.930 { 00:23:08.930 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:08.930 "subtype": "Discovery", 00:23:08.930 "listen_addresses": [], 00:23:08.930 "allow_any_host": true, 00:23:08.930 "hosts": [] 00:23:08.930 }, 00:23:08.930 { 00:23:08.930 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.930 "subtype": "NVMe", 00:23:08.930 "listen_addresses": [ 00:23:08.930 { 00:23:08.930 "trtype": "TCP", 00:23:08.930 "adrfam": "IPv4", 00:23:08.930 "traddr": "10.0.0.2", 00:23:08.930 "trsvcid": "4420" 00:23:08.930 } 00:23:08.930 ], 00:23:08.930 "allow_any_host": true, 00:23:08.930 "hosts": [], 00:23:08.930 "serial_number": "SPDK00000000000001", 00:23:08.930 "model_number": "SPDK bdev Controller", 00:23:08.930 "max_namespaces": 2, 00:23:08.930 "min_cntlid": 1, 00:23:08.930 "max_cntlid": 65519, 00:23:08.930 "namespaces": [ 00:23:08.930 { 00:23:08.930 "nsid": 1, 00:23:08.930 "bdev_name": "Malloc0", 00:23:08.930 "name": "Malloc0", 00:23:08.930 "nguid": "B1E96DDAACC44B4784625D284AE8D501", 00:23:08.930 "uuid": "b1e96dda-acc4-4b47-8462-5d284ae8d501" 00:23:08.930 } 00:23:08.930 ] 00:23:08.930 } 00:23:08.930 ] 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1704576 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:08.930 12:31:30 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:08.930 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:08.930 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:23:08.930 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:23:08.930 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.189 Malloc1 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.189 Asynchronous Event Request test 00:23:09.189 Attaching to 10.0.0.2 00:23:09.189 Attached to 10.0.0.2 00:23:09.189 Registering asynchronous event callbacks... 00:23:09.189 Starting namespace attribute notice tests for all controllers... 00:23:09.189 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:09.189 aer_cb - Changed Namespace 00:23:09.189 Cleaning up... 
00:23:09.189 [ 00:23:09.189 { 00:23:09.189 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:09.189 "subtype": "Discovery", 00:23:09.189 "listen_addresses": [], 00:23:09.189 "allow_any_host": true, 00:23:09.189 "hosts": [] 00:23:09.189 }, 00:23:09.189 { 00:23:09.189 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.189 "subtype": "NVMe", 00:23:09.189 "listen_addresses": [ 00:23:09.189 { 00:23:09.189 "trtype": "TCP", 00:23:09.189 "adrfam": "IPv4", 00:23:09.189 "traddr": "10.0.0.2", 00:23:09.189 "trsvcid": "4420" 00:23:09.189 } 00:23:09.189 ], 00:23:09.189 "allow_any_host": true, 00:23:09.189 "hosts": [], 00:23:09.189 "serial_number": "SPDK00000000000001", 00:23:09.189 "model_number": "SPDK bdev Controller", 00:23:09.189 "max_namespaces": 2, 00:23:09.189 "min_cntlid": 1, 00:23:09.189 "max_cntlid": 65519, 00:23:09.189 "namespaces": [ 00:23:09.189 { 00:23:09.189 "nsid": 1, 00:23:09.189 "bdev_name": "Malloc0", 00:23:09.189 "name": "Malloc0", 00:23:09.189 "nguid": "B1E96DDAACC44B4784625D284AE8D501", 00:23:09.189 "uuid": "b1e96dda-acc4-4b47-8462-5d284ae8d501" 00:23:09.189 }, 00:23:09.189 { 00:23:09.189 "nsid": 2, 00:23:09.189 "bdev_name": "Malloc1", 00:23:09.189 "name": "Malloc1", 00:23:09.189 "nguid": "CE852BAE090F4EAFA608B0E967621ACC", 00:23:09.189 "uuid": "ce852bae-090f-4eaf-a608-b0e967621acc" 00:23:09.189 } 00:23:09.189 ] 00:23:09.189 } 00:23:09.189 ] 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1704576 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.189 12:31:31 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.189 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.190 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.190 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:09.190 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.190 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:09.454 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.454 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:09.454 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:09.454 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:09.454 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:09.454 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:09.454 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:09.454 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:09.454 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:09.454 rmmod nvme_tcp 00:23:09.454 rmmod nvme_fabrics 00:23:09.454 rmmod nvme_keyring 00:23:09.454 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:09.454 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:09.454 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:09.454 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
1704482 ']' 00:23:09.454 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1704482 00:23:09.454 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1704482 ']' 00:23:09.454 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1704482 00:23:09.454 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:23:09.455 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:09.455 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1704482 00:23:09.455 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:09.455 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:09.455 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1704482' 00:23:09.455 killing process with pid 1704482 00:23:09.455 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1704482 00:23:09.455 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1704482 00:23:09.734 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:09.734 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:09.734 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:09.734 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:09.734 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:09.734 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:09.734 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:09.734 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:09.734 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:09.734 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.734 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.734 12:31:31 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.669 12:31:33 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:11.669 00:23:11.669 real 0m9.987s 00:23:11.669 user 0m8.245s 00:23:11.669 sys 0m4.857s 00:23:11.669 12:31:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:11.669 12:31:33 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:11.669 ************************************ 00:23:11.669 END TEST nvmf_aer 00:23:11.669 ************************************ 00:23:11.669 12:31:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:11.669 12:31:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:11.669 12:31:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:11.669 12:31:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.669 ************************************ 00:23:11.669 START TEST nvmf_async_init 00:23:11.670 ************************************ 00:23:11.670 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:11.929 * Looking for test storage... 
00:23:11.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:23:11.929 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:11.929 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:23:11.929 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:11.929 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:11.929 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:11.929 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:11.929 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:11.929 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:11.929 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:11.929 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:11.929 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:11.929 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:11.929 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:11.929 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:11.929 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:11.929 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:11.929 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:11.929 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:11.929 12:31:33 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:11.929 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:11.929 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:11.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.930 --rc genhtml_branch_coverage=1 00:23:11.930 --rc genhtml_function_coverage=1 00:23:11.930 --rc genhtml_legend=1 00:23:11.930 --rc geninfo_all_blocks=1 00:23:11.930 --rc geninfo_unexecuted_blocks=1 00:23:11.930 
00:23:11.930 ' 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:11.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.930 --rc genhtml_branch_coverage=1 00:23:11.930 --rc genhtml_function_coverage=1 00:23:11.930 --rc genhtml_legend=1 00:23:11.930 --rc geninfo_all_blocks=1 00:23:11.930 --rc geninfo_unexecuted_blocks=1 00:23:11.930 00:23:11.930 ' 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:11.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.930 --rc genhtml_branch_coverage=1 00:23:11.930 --rc genhtml_function_coverage=1 00:23:11.930 --rc genhtml_legend=1 00:23:11.930 --rc geninfo_all_blocks=1 00:23:11.930 --rc geninfo_unexecuted_blocks=1 00:23:11.930 00:23:11.930 ' 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:11.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.930 --rc genhtml_branch_coverage=1 00:23:11.930 --rc genhtml_function_coverage=1 00:23:11.930 --rc genhtml_legend=1 00:23:11.930 --rc geninfo_all_blocks=1 00:23:11.930 --rc geninfo_unexecuted_blocks=1 00:23:11.930 00:23:11.930 ' 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.930 12:31:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:11.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=d63e4444d1544657ba0908ec1141177f 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:11.930 12:31:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:18.502 12:31:39 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:18.502 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:18.502 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:18.502 Found net devices under 0000:86:00.0: cvl_0_0 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:18.502 Found net devices under 0000:86:00.1: cvl_0_1 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.502 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:18.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:18.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:23:18.503 00:23:18.503 --- 10.0.0.2 ping statistics --- 00:23:18.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.503 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:18.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:18.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:23:18.503 00:23:18.503 --- 10.0.0.1 ping statistics --- 00:23:18.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.503 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1708248 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1708248 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1708248 ']' 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:18.503 12:31:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.503 [2024-12-10 12:31:40.006901] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:23:18.503 [2024-12-10 12:31:40.006954] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.503 [2024-12-10 12:31:40.089965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.503 [2024-12-10 12:31:40.130228] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.503 [2024-12-10 12:31:40.130266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.503 [2024-12-10 12:31:40.130272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.503 [2024-12-10 12:31:40.130279] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.503 [2024-12-10 12:31:40.130285] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:18.503 [2024-12-10 12:31:40.130828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.503 [2024-12-10 12:31:40.279881] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.503 null0 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d63e4444d1544657ba0908ec1141177f 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.503 [2024-12-10 12:31:40.324167] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.503 nvme0n1 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.503 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.503 [ 00:23:18.503 { 00:23:18.503 "name": "nvme0n1", 00:23:18.503 "aliases": [ 00:23:18.503 "d63e4444-d154-4657-ba09-08ec1141177f" 00:23:18.503 ], 00:23:18.503 "product_name": "NVMe disk", 00:23:18.503 "block_size": 512, 00:23:18.503 "num_blocks": 2097152, 00:23:18.503 "uuid": "d63e4444-d154-4657-ba09-08ec1141177f", 00:23:18.503 "numa_id": 1, 00:23:18.503 "assigned_rate_limits": { 00:23:18.503 "rw_ios_per_sec": 0, 00:23:18.503 "rw_mbytes_per_sec": 0, 00:23:18.503 "r_mbytes_per_sec": 0, 00:23:18.503 "w_mbytes_per_sec": 0 00:23:18.503 }, 00:23:18.503 "claimed": false, 00:23:18.503 "zoned": false, 00:23:18.503 "supported_io_types": { 00:23:18.503 "read": true, 00:23:18.503 "write": true, 00:23:18.503 "unmap": false, 00:23:18.503 "flush": true, 00:23:18.503 "reset": true, 00:23:18.503 "nvme_admin": true, 00:23:18.503 "nvme_io": true, 00:23:18.503 "nvme_io_md": false, 00:23:18.504 "write_zeroes": true, 00:23:18.504 "zcopy": false, 00:23:18.504 "get_zone_info": false, 00:23:18.504 "zone_management": false, 00:23:18.504 "zone_append": false, 00:23:18.504 "compare": true, 00:23:18.504 "compare_and_write": true, 00:23:18.504 "abort": true, 00:23:18.504 "seek_hole": false, 00:23:18.504 "seek_data": false, 00:23:18.504 "copy": true, 00:23:18.504 
"nvme_iov_md": false 00:23:18.504 }, 00:23:18.504 "memory_domains": [ 00:23:18.504 { 00:23:18.504 "dma_device_id": "system", 00:23:18.504 "dma_device_type": 1 00:23:18.504 } 00:23:18.504 ], 00:23:18.504 "driver_specific": { 00:23:18.504 "nvme": [ 00:23:18.504 { 00:23:18.504 "trid": { 00:23:18.504 "trtype": "TCP", 00:23:18.504 "adrfam": "IPv4", 00:23:18.504 "traddr": "10.0.0.2", 00:23:18.504 "trsvcid": "4420", 00:23:18.504 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:18.504 }, 00:23:18.504 "ctrlr_data": { 00:23:18.504 "cntlid": 1, 00:23:18.504 "vendor_id": "0x8086", 00:23:18.504 "model_number": "SPDK bdev Controller", 00:23:18.504 "serial_number": "00000000000000000000", 00:23:18.504 "firmware_revision": "25.01", 00:23:18.504 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:18.504 "oacs": { 00:23:18.504 "security": 0, 00:23:18.504 "format": 0, 00:23:18.504 "firmware": 0, 00:23:18.504 "ns_manage": 0 00:23:18.504 }, 00:23:18.504 "multi_ctrlr": true, 00:23:18.504 "ana_reporting": false 00:23:18.504 }, 00:23:18.504 "vs": { 00:23:18.504 "nvme_version": "1.3" 00:23:18.504 }, 00:23:18.504 "ns_data": { 00:23:18.504 "id": 1, 00:23:18.504 "can_share": true 00:23:18.504 } 00:23:18.504 } 00:23:18.504 ], 00:23:18.504 "mp_policy": "active_passive" 00:23:18.504 } 00:23:18.504 } 00:23:18.504 ] 00:23:18.504 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.504 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:18.504 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.504 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.504 [2024-12-10 12:31:40.585637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:18.504 [2024-12-10 12:31:40.585692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x1290c40 (9): Bad file descriptor 00:23:18.763 [2024-12-10 12:31:40.717234] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.763 [ 00:23:18.763 { 00:23:18.763 "name": "nvme0n1", 00:23:18.763 "aliases": [ 00:23:18.763 "d63e4444-d154-4657-ba09-08ec1141177f" 00:23:18.763 ], 00:23:18.763 "product_name": "NVMe disk", 00:23:18.763 "block_size": 512, 00:23:18.763 "num_blocks": 2097152, 00:23:18.763 "uuid": "d63e4444-d154-4657-ba09-08ec1141177f", 00:23:18.763 "numa_id": 1, 00:23:18.763 "assigned_rate_limits": { 00:23:18.763 "rw_ios_per_sec": 0, 00:23:18.763 "rw_mbytes_per_sec": 0, 00:23:18.763 "r_mbytes_per_sec": 0, 00:23:18.763 "w_mbytes_per_sec": 0 00:23:18.763 }, 00:23:18.763 "claimed": false, 00:23:18.763 "zoned": false, 00:23:18.763 "supported_io_types": { 00:23:18.763 "read": true, 00:23:18.763 "write": true, 00:23:18.763 "unmap": false, 00:23:18.763 "flush": true, 00:23:18.763 "reset": true, 00:23:18.763 "nvme_admin": true, 00:23:18.763 "nvme_io": true, 00:23:18.763 "nvme_io_md": false, 00:23:18.763 "write_zeroes": true, 00:23:18.763 "zcopy": false, 00:23:18.763 "get_zone_info": false, 00:23:18.763 "zone_management": false, 00:23:18.763 "zone_append": false, 00:23:18.763 "compare": true, 00:23:18.763 "compare_and_write": true, 00:23:18.763 "abort": true, 00:23:18.763 "seek_hole": false, 00:23:18.763 "seek_data": false, 00:23:18.763 "copy": true, 00:23:18.763 "nvme_iov_md": false 00:23:18.763 }, 00:23:18.763 "memory_domains": [ 
00:23:18.763 { 00:23:18.763 "dma_device_id": "system", 00:23:18.763 "dma_device_type": 1 00:23:18.763 } 00:23:18.763 ], 00:23:18.763 "driver_specific": { 00:23:18.763 "nvme": [ 00:23:18.763 { 00:23:18.763 "trid": { 00:23:18.763 "trtype": "TCP", 00:23:18.763 "adrfam": "IPv4", 00:23:18.763 "traddr": "10.0.0.2", 00:23:18.763 "trsvcid": "4420", 00:23:18.763 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:18.763 }, 00:23:18.763 "ctrlr_data": { 00:23:18.763 "cntlid": 2, 00:23:18.763 "vendor_id": "0x8086", 00:23:18.763 "model_number": "SPDK bdev Controller", 00:23:18.763 "serial_number": "00000000000000000000", 00:23:18.763 "firmware_revision": "25.01", 00:23:18.763 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:18.763 "oacs": { 00:23:18.763 "security": 0, 00:23:18.763 "format": 0, 00:23:18.763 "firmware": 0, 00:23:18.763 "ns_manage": 0 00:23:18.763 }, 00:23:18.763 "multi_ctrlr": true, 00:23:18.763 "ana_reporting": false 00:23:18.763 }, 00:23:18.763 "vs": { 00:23:18.763 "nvme_version": "1.3" 00:23:18.763 }, 00:23:18.763 "ns_data": { 00:23:18.763 "id": 1, 00:23:18.763 "can_share": true 00:23:18.763 } 00:23:18.763 } 00:23:18.763 ], 00:23:18.763 "mp_policy": "active_passive" 00:23:18.763 } 00:23:18.763 } 00:23:18.763 ] 00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.XYoXoOifQf 
00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.XYoXoOifQf 00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.XYoXoOifQf 00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.763 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.763 [2024-12-10 12:31:40.790254] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:18.764 [2024-12-10 12:31:40.790348] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.764 [2024-12-10 12:31:40.806307] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:18.764 nvme0n1 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.764 [ 00:23:18.764 { 00:23:18.764 "name": "nvme0n1", 00:23:18.764 "aliases": [ 00:23:18.764 "d63e4444-d154-4657-ba09-08ec1141177f" 00:23:18.764 ], 00:23:18.764 "product_name": "NVMe disk", 00:23:18.764 "block_size": 512, 00:23:18.764 "num_blocks": 2097152, 00:23:18.764 "uuid": "d63e4444-d154-4657-ba09-08ec1141177f", 00:23:18.764 "numa_id": 1, 00:23:18.764 "assigned_rate_limits": { 00:23:18.764 "rw_ios_per_sec": 0, 00:23:18.764 
"rw_mbytes_per_sec": 0, 00:23:18.764 "r_mbytes_per_sec": 0, 00:23:18.764 "w_mbytes_per_sec": 0 00:23:18.764 }, 00:23:18.764 "claimed": false, 00:23:18.764 "zoned": false, 00:23:18.764 "supported_io_types": { 00:23:18.764 "read": true, 00:23:18.764 "write": true, 00:23:18.764 "unmap": false, 00:23:18.764 "flush": true, 00:23:18.764 "reset": true, 00:23:18.764 "nvme_admin": true, 00:23:18.764 "nvme_io": true, 00:23:18.764 "nvme_io_md": false, 00:23:18.764 "write_zeroes": true, 00:23:18.764 "zcopy": false, 00:23:18.764 "get_zone_info": false, 00:23:18.764 "zone_management": false, 00:23:18.764 "zone_append": false, 00:23:18.764 "compare": true, 00:23:18.764 "compare_and_write": true, 00:23:18.764 "abort": true, 00:23:18.764 "seek_hole": false, 00:23:18.764 "seek_data": false, 00:23:18.764 "copy": true, 00:23:18.764 "nvme_iov_md": false 00:23:18.764 }, 00:23:18.764 "memory_domains": [ 00:23:18.764 { 00:23:18.764 "dma_device_id": "system", 00:23:18.764 "dma_device_type": 1 00:23:18.764 } 00:23:18.764 ], 00:23:18.764 "driver_specific": { 00:23:18.764 "nvme": [ 00:23:18.764 { 00:23:18.764 "trid": { 00:23:18.764 "trtype": "TCP", 00:23:18.764 "adrfam": "IPv4", 00:23:18.764 "traddr": "10.0.0.2", 00:23:18.764 "trsvcid": "4421", 00:23:18.764 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:18.764 }, 00:23:18.764 "ctrlr_data": { 00:23:18.764 "cntlid": 3, 00:23:18.764 "vendor_id": "0x8086", 00:23:18.764 "model_number": "SPDK bdev Controller", 00:23:18.764 "serial_number": "00000000000000000000", 00:23:18.764 "firmware_revision": "25.01", 00:23:18.764 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:18.764 "oacs": { 00:23:18.764 "security": 0, 00:23:18.764 "format": 0, 00:23:18.764 "firmware": 0, 00:23:18.764 "ns_manage": 0 00:23:18.764 }, 00:23:18.764 "multi_ctrlr": true, 00:23:18.764 "ana_reporting": false 00:23:18.764 }, 00:23:18.764 "vs": { 00:23:18.764 "nvme_version": "1.3" 00:23:18.764 }, 00:23:18.764 "ns_data": { 00:23:18.764 "id": 1, 00:23:18.764 "can_share": true 00:23:18.764 } 
00:23:18.764 } 00:23:18.764 ], 00:23:18.764 "mp_policy": "active_passive" 00:23:18.764 } 00:23:18.764 } 00:23:18.764 ] 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.XYoXoOifQf 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:18.764 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:18.764 rmmod nvme_tcp 00:23:19.023 rmmod nvme_fabrics 00:23:19.023 rmmod nvme_keyring 00:23:19.023 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:19.023 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:19.023 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:19.023 12:31:40 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1708248 ']' 00:23:19.023 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1708248 00:23:19.023 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1708248 ']' 00:23:19.023 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1708248 00:23:19.023 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:19.023 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:19.023 12:31:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1708248 00:23:19.023 12:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:19.023 12:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:19.023 12:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1708248' 00:23:19.023 killing process with pid 1708248 00:23:19.023 12:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1708248 00:23:19.023 12:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1708248 00:23:19.023 12:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:19.023 12:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:19.023 12:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:19.023 12:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:19.023 12:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:19.023 12:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:19.023 
12:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:19.282 12:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:19.282 12:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:19.282 12:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.282 12:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.282 12:31:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.186 12:31:43 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:21.186 00:23:21.186 real 0m9.453s 00:23:21.186 user 0m3.025s 00:23:21.186 sys 0m4.884s 00:23:21.186 12:31:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:21.186 12:31:43 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.186 ************************************ 00:23:21.186 END TEST nvmf_async_init 00:23:21.186 ************************************ 00:23:21.186 12:31:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:21.186 12:31:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:21.186 12:31:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:21.186 12:31:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.186 ************************************ 00:23:21.186 START TEST dma 00:23:21.186 ************************************ 00:23:21.186 12:31:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:23:21.446 * Looking for test storage... 00:23:21.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:21.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.446 --rc genhtml_branch_coverage=1 00:23:21.446 --rc genhtml_function_coverage=1 00:23:21.446 --rc genhtml_legend=1 00:23:21.446 --rc geninfo_all_blocks=1 00:23:21.446 --rc geninfo_unexecuted_blocks=1 00:23:21.446 00:23:21.446 ' 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:21.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.446 --rc genhtml_branch_coverage=1 00:23:21.446 --rc genhtml_function_coverage=1 
00:23:21.446 --rc genhtml_legend=1 00:23:21.446 --rc geninfo_all_blocks=1 00:23:21.446 --rc geninfo_unexecuted_blocks=1 00:23:21.446 00:23:21.446 ' 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:21.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.446 --rc genhtml_branch_coverage=1 00:23:21.446 --rc genhtml_function_coverage=1 00:23:21.446 --rc genhtml_legend=1 00:23:21.446 --rc geninfo_all_blocks=1 00:23:21.446 --rc geninfo_unexecuted_blocks=1 00:23:21.446 00:23:21.446 ' 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:21.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.446 --rc genhtml_branch_coverage=1 00:23:21.446 --rc genhtml_function_coverage=1 00:23:21.446 --rc genhtml_legend=1 00:23:21.446 --rc geninfo_all_blocks=1 00:23:21.446 --rc geninfo_unexecuted_blocks=1 00:23:21.446 00:23:21.446 ' 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.446 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:21.447 
12:31:43 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:21.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:21.447 00:23:21.447 real 0m0.211s 00:23:21.447 user 0m0.132s 00:23:21.447 sys 0m0.091s 00:23:21.447 12:31:43 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:21.447 ************************************ 00:23:21.447 END TEST dma 00:23:21.447 ************************************ 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.447 ************************************ 00:23:21.447 START TEST nvmf_identify 00:23:21.447 ************************************ 00:23:21.447 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:21.707 * Looking for test storage... 
00:23:21.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # 
(( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:21.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.707 --rc genhtml_branch_coverage=1 00:23:21.707 --rc genhtml_function_coverage=1 00:23:21.707 --rc genhtml_legend=1 00:23:21.707 --rc geninfo_all_blocks=1 00:23:21.707 --rc geninfo_unexecuted_blocks=1 00:23:21.707 00:23:21.707 ' 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 
-- # LCOV_OPTS=' 00:23:21.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.707 --rc genhtml_branch_coverage=1 00:23:21.707 --rc genhtml_function_coverage=1 00:23:21.707 --rc genhtml_legend=1 00:23:21.707 --rc geninfo_all_blocks=1 00:23:21.707 --rc geninfo_unexecuted_blocks=1 00:23:21.707 00:23:21.707 ' 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:21.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.707 --rc genhtml_branch_coverage=1 00:23:21.707 --rc genhtml_function_coverage=1 00:23:21.707 --rc genhtml_legend=1 00:23:21.707 --rc geninfo_all_blocks=1 00:23:21.707 --rc geninfo_unexecuted_blocks=1 00:23:21.707 00:23:21.707 ' 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:21.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:21.707 --rc genhtml_branch_coverage=1 00:23:21.707 --rc genhtml_function_coverage=1 00:23:21.707 --rc genhtml_legend=1 00:23:21.707 --rc geninfo_all_blocks=1 00:23:21.707 --rc geninfo_unexecuted_blocks=1 00:23:21.707 00:23:21.707 ' 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 
-- # NVMF_IP_LEAST_ADDR=8 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.707 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:21.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify 
-- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:21.708 12:31:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:28.280 12:31:49 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:28.280 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.280 
12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:28.280 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:28.280 Found net devices under 0000:86:00.0: cvl_0_0 00:23:28.280 12:31:49 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:28.280 Found net devices under 0000:86:00.1: cvl_0_1 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:28.280 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:28.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:23:28.281 00:23:28.281 --- 10.0.0.2 ping statistics --- 00:23:28.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.281 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:28.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:28.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:23:28.281 00:23:28.281 --- 10.0.0.1 ping statistics --- 00:23:28.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.281 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1711930 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1711930 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1711930 ']' 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
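The nvmf_tcp_init sequence traced above moves one port of the dual-port E810 NIC into a private network namespace so the SPDK target (cvl_0_0, 10.0.0.2) and the initiator (cvl_0_1, 10.0.0.1) can talk over real hardware on a single host. A dry-run sketch of the same steps, using the interface names, addresses, and port from this log; the run() echo wrapper is an illustrative addition so the script is safe to execute without root (replace it with sudo to apply for real):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns topology nvmf_tcp_init builds in the log above.
run() { echo "+ $*"; }           # echoes instead of executing (illustrative)

TARGET_IF=cvl_0_0                # port 0000:86:00.0 -> moved into the namespace
INITIATOR_IF=cvl_0_1             # port 0000:86:00.1 -> stays in the root namespace
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port on the initiator side, then verify both directions:
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The two pings at the end mirror the checks in the log (common.sh@290/@291): the setup only counts as up once root namespace and target namespace can reach each other.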
00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.281 12:31:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.281 [2024-12-10 12:31:49.800101] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:23:28.281 [2024-12-10 12:31:49.800154] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.281 [2024-12-10 12:31:49.868431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:28.281 [2024-12-10 12:31:49.913211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.281 [2024-12-10 12:31:49.913247] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.281 [2024-12-10 12:31:49.913254] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.281 [2024-12-10 12:31:49.913260] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.281 [2024-12-10 12:31:49.913267] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:28.281 [2024-12-10 12:31:49.916178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.281 [2024-12-10 12:31:49.916223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.281 [2024-12-10 12:31:49.920214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.281 [2024-12-10 12:31:49.920214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.281 [2024-12-10 12:31:50.030100] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.281 Malloc0 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.281 12:31:50 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.281 [2024-12-10 12:31:50.135376] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.281 12:31:50 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.281 [ 00:23:28.281 { 00:23:28.281 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:28.281 "subtype": "Discovery", 00:23:28.281 "listen_addresses": [ 00:23:28.281 { 00:23:28.281 "trtype": "TCP", 00:23:28.281 "adrfam": "IPv4", 00:23:28.281 "traddr": "10.0.0.2", 00:23:28.281 "trsvcid": "4420" 00:23:28.281 } 00:23:28.281 ], 00:23:28.281 "allow_any_host": true, 00:23:28.281 "hosts": [] 00:23:28.281 }, 00:23:28.281 { 00:23:28.281 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.281 "subtype": "NVMe", 00:23:28.281 "listen_addresses": [ 00:23:28.281 { 00:23:28.281 "trtype": "TCP", 00:23:28.281 "adrfam": "IPv4", 00:23:28.281 "traddr": "10.0.0.2", 00:23:28.281 "trsvcid": "4420" 00:23:28.281 } 00:23:28.281 ], 00:23:28.281 "allow_any_host": true, 00:23:28.281 "hosts": [], 00:23:28.281 "serial_number": "SPDK00000000000001", 00:23:28.281 "model_number": "SPDK bdev Controller", 00:23:28.281 "max_namespaces": 32, 00:23:28.281 "min_cntlid": 1, 00:23:28.281 "max_cntlid": 65519, 00:23:28.281 "namespaces": [ 00:23:28.281 { 00:23:28.281 "nsid": 1, 00:23:28.281 "bdev_name": "Malloc0", 00:23:28.281 "name": "Malloc0", 00:23:28.281 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:28.281 "eui64": "ABCDEF0123456789", 00:23:28.281 "uuid": "7f179e15-9004-4200-b74d-018dc7bbc25a" 00:23:28.281 } 00:23:28.281 ] 00:23:28.281 } 00:23:28.281 ] 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.281 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:28.282 [2024-12-10 12:31:50.188272] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:23:28.282 [2024-12-10 12:31:50.188309] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1712168 ] 00:23:28.282 [2024-12-10 12:31:50.228124] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:28.282 [2024-12-10 12:31:50.232173] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:28.282 [2024-12-10 12:31:50.232180] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:28.282 [2024-12-10 12:31:50.232191] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:28.282 [2024-12-10 12:31:50.232200] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:28.282 [2024-12-10 12:31:50.232804] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:28.282 [2024-12-10 12:31:50.232839] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x5db690 0 00:23:28.282 [2024-12-10 12:31:50.247168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:28.282 [2024-12-10 12:31:50.247182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:28.282 [2024-12-10 12:31:50.247187] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:28.282 [2024-12-10 12:31:50.247190] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:28.282 [2024-12-10 12:31:50.247222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.282 [2024-12-10 12:31:50.247228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.282 [2024-12-10 12:31:50.247232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5db690) 00:23:28.282 [2024-12-10 12:31:50.247248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:28.282 [2024-12-10 12:31:50.247265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d100, cid 0, qid 0 00:23:28.282 [2024-12-10 12:31:50.255169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.282 [2024-12-10 12:31:50.255177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.282 [2024-12-10 12:31:50.255180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.282 [2024-12-10 12:31:50.255184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d100) on tqpair=0x5db690 00:23:28.282 [2024-12-10 12:31:50.255196] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:28.282 [2024-12-10 12:31:50.255203] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:28.282 [2024-12-10 12:31:50.255211] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:28.282 [2024-12-10 12:31:50.255224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.282 [2024-12-10 12:31:50.255227] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.282 [2024-12-10 12:31:50.255231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5db690) 
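Stripped of the xtrace noise, the target configuration that host/identify.sh drives earlier in this log reduces to a short RPC sequence: create the TCP transport, back it with a 64 MiB malloc bdev, create subsystem nqn.2016-06.io.spdk:cnode1, attach the namespace, and listen on 10.0.0.2:4420 for both the subsystem and discovery. A dry-run sketch with the arguments copied from the log; the rpc() echo wrapper is an illustrative assumption standing in for scripts/rpc.py against the live target socket:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC sequence host/identify.sh issues in the log above.
rpc() { echo "+ rpc.py $*"; }    # illustrative stand-in for scripts/rpc.py

NQN=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192                   # identify.sh@24
rpc bdev_malloc_create 64 512 -b Malloc0                      # identify.sh@27: 64 MiB, 512 B blocks
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001     # identify.sh@28
rpc nvmf_subsystem_add_ns "$NQN" Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789   # identify.sh@31
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420       # identify.sh@34
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420    # identify.sh@35
rpc nvmf_get_subsystems                                        # identify.sh@37: dumps the JSON above
```

The nvmf_get_subsystems output in the log is exactly the state this sequence leaves behind: the discovery subsystem plus cnode1 with Malloc0 as nsid 1, both listening on 10.0.0.2:4420.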
00:23:28.282 [2024-12-10 12:31:50.255237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.282 [2024-12-10 12:31:50.255251] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d100, cid 0, qid 0 00:23:28.282 [2024-12-10 12:31:50.255415] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.282 [2024-12-10 12:31:50.255422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.282 [2024-12-10 12:31:50.255425] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.282 [2024-12-10 12:31:50.255428] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d100) on tqpair=0x5db690 00:23:28.282 [2024-12-10 12:31:50.255433] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:28.282 [2024-12-10 12:31:50.255439] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:28.282 [2024-12-10 12:31:50.255446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.282 [2024-12-10 12:31:50.255449] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.282 [2024-12-10 12:31:50.255452] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5db690) 00:23:28.282 [2024-12-10 12:31:50.255458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.282 [2024-12-10 12:31:50.255468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d100, cid 0, qid 0 00:23:28.282 [2024-12-10 12:31:50.255554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.282 [2024-12-10 12:31:50.255560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:28.282 [2024-12-10 12:31:50.255563] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.282 [2024-12-10 12:31:50.255566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d100) on tqpair=0x5db690 00:23:28.282 [2024-12-10 12:31:50.255571] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:28.282 [2024-12-10 12:31:50.255578] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:28.282 [2024-12-10 12:31:50.255584] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.282 [2024-12-10 12:31:50.255587] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.282 [2024-12-10 12:31:50.255590] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5db690) 00:23:28.282 [2024-12-10 12:31:50.255596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.282 [2024-12-10 12:31:50.255605] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d100, cid 0, qid 0 00:23:28.282 [2024-12-10 12:31:50.255704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.282 [2024-12-10 12:31:50.255709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.282 [2024-12-10 12:31:50.255712] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.282 [2024-12-10 12:31:50.255716] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d100) on tqpair=0x5db690 00:23:28.282 [2024-12-10 12:31:50.255720] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:28.282 [2024-12-10 12:31:50.255728] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.282 [2024-12-10 12:31:50.255734] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.282 [2024-12-10 12:31:50.255737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5db690) 00:23:28.282 [2024-12-10 12:31:50.255743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.282 [2024-12-10 12:31:50.255752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d100, cid 0, qid 0 00:23:28.282 [2024-12-10 12:31:50.255857] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.282 [2024-12-10 12:31:50.255863] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.282 [2024-12-10 12:31:50.255866] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.282 [2024-12-10 12:31:50.255869] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d100) on tqpair=0x5db690 00:23:28.282 [2024-12-10 12:31:50.255873] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:28.282 [2024-12-10 12:31:50.255878] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:28.282 [2024-12-10 12:31:50.255884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:28.282 [2024-12-10 12:31:50.255992] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:28.282 [2024-12-10 12:31:50.255997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:23:28.282 [2024-12-10 12:31:50.256005] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.282 [2024-12-10 12:31:50.256008] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.282 [2024-12-10 12:31:50.256011] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5db690) 00:23:28.282 [2024-12-10 12:31:50.256017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.282 [2024-12-10 12:31:50.256026] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d100, cid 0, qid 0 00:23:28.282 [2024-12-10 12:31:50.256109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.282 [2024-12-10 12:31:50.256115] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.282 [2024-12-10 12:31:50.256118] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.282 [2024-12-10 12:31:50.256121] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d100) on tqpair=0x5db690 00:23:28.282 [2024-12-10 12:31:50.256125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:28.282 [2024-12-10 12:31:50.256133] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.282 [2024-12-10 12:31:50.256137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.282 [2024-12-10 12:31:50.256140] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5db690) 00:23:28.282 [2024-12-10 12:31:50.256146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.282 [2024-12-10 12:31:50.256155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d100, cid 0, qid 0 00:23:28.282 [2024-12-10 
12:31:50.256243] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.282 [2024-12-10 12:31:50.256248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.282 [2024-12-10 12:31:50.256251] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.282 [2024-12-10 12:31:50.256254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d100) on tqpair=0x5db690 00:23:28.282 [2024-12-10 12:31:50.256259] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:28.282 [2024-12-10 12:31:50.256265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:28.282 [2024-12-10 12:31:50.256271] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:28.282 [2024-12-10 12:31:50.256282] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:28.282 [2024-12-10 12:31:50.256295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.282 [2024-12-10 12:31:50.256299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5db690) 00:23:28.282 [2024-12-10 12:31:50.256305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.282 [2024-12-10 12:31:50.256315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d100, cid 0, qid 0 00:23:28.282 [2024-12-10 12:31:50.256410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.282 [2024-12-10 12:31:50.256416] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:23:28.283 [2024-12-10 12:31:50.256419] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.256423] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5db690): datao=0, datal=4096, cccid=0 00:23:28.283 [2024-12-10 12:31:50.256427] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x63d100) on tqpair(0x5db690): expected_datao=0, payload_size=4096 00:23:28.283 [2024-12-10 12:31:50.256431] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.256437] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.256441] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.256495] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.283 [2024-12-10 12:31:50.256500] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.283 [2024-12-10 12:31:50.256503] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.256506] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d100) on tqpair=0x5db690 00:23:28.283 [2024-12-10 12:31:50.256513] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:28.283 [2024-12-10 12:31:50.256518] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:28.283 [2024-12-10 12:31:50.256522] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:28.283 [2024-12-10 12:31:50.256526] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:28.283 [2024-12-10 12:31:50.256530] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:23:28.283 [2024-12-10 12:31:50.256534] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:28.283 [2024-12-10 12:31:50.256542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:28.283 [2024-12-10 12:31:50.256548] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.256552] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.256555] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5db690) 00:23:28.283 [2024-12-10 12:31:50.256561] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:28.283 [2024-12-10 12:31:50.256571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d100, cid 0, qid 0 00:23:28.283 [2024-12-10 12:31:50.256645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.283 [2024-12-10 12:31:50.256651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.283 [2024-12-10 12:31:50.256654] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.256657] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d100) on tqpair=0x5db690 00:23:28.283 [2024-12-10 12:31:50.256664] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.256668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.256671] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x5db690) 00:23:28.283 [2024-12-10 12:31:50.256676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.283 [2024-12-10 12:31:50.256681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.256685] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.256688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x5db690) 00:23:28.283 [2024-12-10 12:31:50.256693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.283 [2024-12-10 12:31:50.256698] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.256701] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.256704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x5db690) 00:23:28.283 [2024-12-10 12:31:50.256709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.283 [2024-12-10 12:31:50.256714] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.256717] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.256720] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.283 [2024-12-10 12:31:50.256725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.283 [2024-12-10 12:31:50.256729] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:28.283 [2024-12-10 12:31:50.256740] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:23:28.283 [2024-12-10 12:31:50.256745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.256749] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5db690) 00:23:28.283 [2024-12-10 12:31:50.256754] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.283 [2024-12-10 12:31:50.256765] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d100, cid 0, qid 0 00:23:28.283 [2024-12-10 12:31:50.256769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d280, cid 1, qid 0 00:23:28.283 [2024-12-10 12:31:50.256773] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d400, cid 2, qid 0 00:23:28.283 [2024-12-10 12:31:50.256777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.283 [2024-12-10 12:31:50.256781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d700, cid 4, qid 0 00:23:28.283 [2024-12-10 12:31:50.256900] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.283 [2024-12-10 12:31:50.256906] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.283 [2024-12-10 12:31:50.256908] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.256912] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d700) on tqpair=0x5db690 00:23:28.283 [2024-12-10 12:31:50.256917] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:28.283 [2024-12-10 12:31:50.256923] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:23:28.283 [2024-12-10 12:31:50.256932] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.256935] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5db690) 00:23:28.283 [2024-12-10 12:31:50.256941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.283 [2024-12-10 12:31:50.256951] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d700, cid 4, qid 0 00:23:28.283 [2024-12-10 12:31:50.257023] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.283 [2024-12-10 12:31:50.257028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.283 [2024-12-10 12:31:50.257031] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.257034] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5db690): datao=0, datal=4096, cccid=4 00:23:28.283 [2024-12-10 12:31:50.257038] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x63d700) on tqpair(0x5db690): expected_datao=0, payload_size=4096 00:23:28.283 [2024-12-10 12:31:50.257042] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.257068] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.257072] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.257149] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.283 [2024-12-10 12:31:50.257155] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.283 [2024-12-10 12:31:50.257165] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.257168] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d700) on tqpair=0x5db690 00:23:28.283 [2024-12-10 12:31:50.257179] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:28.283 [2024-12-10 12:31:50.257200] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.257204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5db690) 00:23:28.283 [2024-12-10 12:31:50.257209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.283 [2024-12-10 12:31:50.257215] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.257218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.257221] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x5db690) 00:23:28.283 [2024-12-10 12:31:50.257226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.283 [2024-12-10 12:31:50.257240] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d700, cid 4, qid 0 00:23:28.283 [2024-12-10 12:31:50.257245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d880, cid 5, qid 0 00:23:28.283 [2024-12-10 12:31:50.257345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.283 [2024-12-10 12:31:50.257350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.283 [2024-12-10 12:31:50.257353] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.257356] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5db690): datao=0, datal=1024, cccid=4 00:23:28.283 [2024-12-10 12:31:50.257360] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x63d700) on tqpair(0x5db690): expected_datao=0, 
payload_size=1024 00:23:28.283 [2024-12-10 12:31:50.257364] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.257369] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.257375] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.257380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.283 [2024-12-10 12:31:50.257385] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.283 [2024-12-10 12:31:50.257387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.257391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d880) on tqpair=0x5db690 00:23:28.283 [2024-12-10 12:31:50.298300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.283 [2024-12-10 12:31:50.298312] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.283 [2024-12-10 12:31:50.298315] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.298319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d700) on tqpair=0x5db690 00:23:28.283 [2024-12-10 12:31:50.298330] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.283 [2024-12-10 12:31:50.298334] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5db690) 00:23:28.284 [2024-12-10 12:31:50.298341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.284 [2024-12-10 12:31:50.298356] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d700, cid 4, qid 0 00:23:28.284 [2024-12-10 12:31:50.298436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.284 [2024-12-10 12:31:50.298442] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.284 [2024-12-10 12:31:50.298445] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.284 [2024-12-10 12:31:50.298448] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5db690): datao=0, datal=3072, cccid=4 00:23:28.284 [2024-12-10 12:31:50.298452] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x63d700) on tqpair(0x5db690): expected_datao=0, payload_size=3072 00:23:28.284 [2024-12-10 12:31:50.298456] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.284 [2024-12-10 12:31:50.298462] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.284 [2024-12-10 12:31:50.298465] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.284 [2024-12-10 12:31:50.298497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.284 [2024-12-10 12:31:50.298503] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.284 [2024-12-10 12:31:50.298506] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.284 [2024-12-10 12:31:50.298509] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d700) on tqpair=0x5db690 00:23:28.284 [2024-12-10 12:31:50.298516] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.284 [2024-12-10 12:31:50.298520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x5db690) 00:23:28.284 [2024-12-10 12:31:50.298526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.284 [2024-12-10 12:31:50.298538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d700, cid 4, qid 0 00:23:28.284 [2024-12-10 12:31:50.298650] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.284 [2024-12-10 
12:31:50.298655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:28.284 [2024-12-10 12:31:50.298658] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:28.284 [2024-12-10 12:31:50.298661] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x5db690): datao=0, datal=8, cccid=4
00:23:28.284 [2024-12-10 12:31:50.298665] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x63d700) on tqpair(0x5db690): expected_datao=0, payload_size=8
00:23:28.284 [2024-12-10 12:31:50.298669] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:28.284 [2024-12-10 12:31:50.298674] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:28.284 [2024-12-10 12:31:50.298677] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:28.284 [2024-12-10 12:31:50.343165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:28.284 [2024-12-10 12:31:50.343178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:28.284 [2024-12-10 12:31:50.343181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:28.284 [2024-12-10 12:31:50.343185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d700) on tqpair=0x5db690
00:23:28.284 =====================================================
00:23:28.284 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:23:28.284 =====================================================
00:23:28.284 Controller Capabilities/Features
00:23:28.284 ================================
00:23:28.284 Vendor ID: 0000
00:23:28.284 Subsystem Vendor ID: 0000
00:23:28.284 Serial Number: ....................
00:23:28.284 Model Number: ........................................
00:23:28.284 Firmware Version: 25.01
00:23:28.284 Recommended Arb Burst: 0
00:23:28.284 IEEE OUI Identifier: 00 00 00
00:23:28.284 Multi-path I/O
00:23:28.284 May have multiple subsystem ports: No
00:23:28.284 May have multiple controllers: No
00:23:28.284 Associated with SR-IOV VF: No
00:23:28.284 Max Data Transfer Size: 131072
00:23:28.284 Max Number of Namespaces: 0
00:23:28.284 Max Number of I/O Queues: 1024
00:23:28.284 NVMe Specification Version (VS): 1.3
00:23:28.284 NVMe Specification Version (Identify): 1.3
00:23:28.284 Maximum Queue Entries: 128
00:23:28.284 Contiguous Queues Required: Yes
00:23:28.284 Arbitration Mechanisms Supported
00:23:28.284 Weighted Round Robin: Not Supported
00:23:28.284 Vendor Specific: Not Supported
00:23:28.284 Reset Timeout: 15000 ms
00:23:28.284 Doorbell Stride: 4 bytes
00:23:28.284 NVM Subsystem Reset: Not Supported
00:23:28.284 Command Sets Supported
00:23:28.284 NVM Command Set: Supported
00:23:28.284 Boot Partition: Not Supported
00:23:28.284 Memory Page Size Minimum: 4096 bytes
00:23:28.284 Memory Page Size Maximum: 4096 bytes
00:23:28.284 Persistent Memory Region: Not Supported
00:23:28.284 Optional Asynchronous Events Supported
00:23:28.284 Namespace Attribute Notices: Not Supported
00:23:28.284 Firmware Activation Notices: Not Supported
00:23:28.284 ANA Change Notices: Not Supported
00:23:28.284 PLE Aggregate Log Change Notices: Not Supported
00:23:28.284 LBA Status Info Alert Notices: Not Supported
00:23:28.284 EGE Aggregate Log Change Notices: Not Supported
00:23:28.284 Normal NVM Subsystem Shutdown event: Not Supported
00:23:28.284 Zone Descriptor Change Notices: Not Supported
00:23:28.284 Discovery Log Change Notices: Supported
00:23:28.284 Controller Attributes
00:23:28.284 128-bit Host Identifier: Not Supported
00:23:28.284 Non-Operational Permissive Mode: Not Supported
00:23:28.284 NVM Sets: Not Supported
00:23:28.284 Read Recovery Levels: Not Supported
00:23:28.284 Endurance Groups: Not Supported
00:23:28.284 Predictable Latency Mode: Not Supported
00:23:28.284 Traffic Based Keep ALive: Not Supported
00:23:28.284 Namespace Granularity: Not Supported
00:23:28.284 SQ Associations: Not Supported
00:23:28.284 UUID List: Not Supported
00:23:28.284 Multi-Domain Subsystem: Not Supported
00:23:28.284 Fixed Capacity Management: Not Supported
00:23:28.284 Variable Capacity Management: Not Supported
00:23:28.284 Delete Endurance Group: Not Supported
00:23:28.284 Delete NVM Set: Not Supported
00:23:28.284 Extended LBA Formats Supported: Not Supported
00:23:28.284 Flexible Data Placement Supported: Not Supported
00:23:28.284 
00:23:28.284 Controller Memory Buffer Support
00:23:28.284 ================================
00:23:28.284 Supported: No
00:23:28.284 
00:23:28.284 Persistent Memory Region Support
00:23:28.284 ================================
00:23:28.284 Supported: No
00:23:28.284 
00:23:28.284 Admin Command Set Attributes
00:23:28.284 ============================
00:23:28.284 Security Send/Receive: Not Supported
00:23:28.284 Format NVM: Not Supported
00:23:28.284 Firmware Activate/Download: Not Supported
00:23:28.284 Namespace Management: Not Supported
00:23:28.284 Device Self-Test: Not Supported
00:23:28.284 Directives: Not Supported
00:23:28.284 NVMe-MI: Not Supported
00:23:28.284 Virtualization Management: Not Supported
00:23:28.284 Doorbell Buffer Config: Not Supported
00:23:28.284 Get LBA Status Capability: Not Supported
00:23:28.284 Command & Feature Lockdown Capability: Not Supported
00:23:28.284 Abort Command Limit: 1
00:23:28.284 Async Event Request Limit: 4
00:23:28.284 Number of Firmware Slots: N/A
00:23:28.284 Firmware Slot 1 Read-Only: N/A
00:23:28.284 Firmware Activation Without Reset: N/A
00:23:28.284 Multiple Update Detection Support: N/A
00:23:28.284 Firmware Update Granularity: No Information Provided
00:23:28.284 Per-Namespace SMART Log: No
00:23:28.284 Asymmetric Namespace Access Log Page: Not Supported
00:23:28.284 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:28.284 Command Effects Log Page: Not Supported
00:23:28.284 Get Log Page Extended Data: Supported
00:23:28.284 Telemetry Log Pages: Not Supported
00:23:28.284 Persistent Event Log Pages: Not Supported
00:23:28.284 Supported Log Pages Log Page: May Support
00:23:28.284 Commands Supported & Effects Log Page: Not Supported
00:23:28.284 Feature Identifiers & Effects Log Page:May Support
00:23:28.284 NVMe-MI Commands & Effects Log Page: May Support
00:23:28.284 Data Area 4 for Telemetry Log: Not Supported
00:23:28.284 Error Log Page Entries Supported: 128
00:23:28.284 Keep Alive: Not Supported
00:23:28.284 
00:23:28.284 NVM Command Set Attributes
00:23:28.284 ==========================
00:23:28.284 Submission Queue Entry Size
00:23:28.284 Max: 1
00:23:28.284 Min: 1
00:23:28.284 Completion Queue Entry Size
00:23:28.284 Max: 1
00:23:28.284 Min: 1
00:23:28.284 Number of Namespaces: 0
00:23:28.285 Compare Command: Not Supported
00:23:28.285 Write Uncorrectable Command: Not Supported
00:23:28.285 Dataset Management Command: Not Supported
00:23:28.285 Write Zeroes Command: Not Supported
00:23:28.285 Set Features Save Field: Not Supported
00:23:28.285 Reservations: Not Supported
00:23:28.285 Timestamp: Not Supported
00:23:28.285 Copy: Not Supported
00:23:28.285 Volatile Write Cache: Not Present
00:23:28.285 Atomic Write Unit (Normal): 1
00:23:28.285 Atomic Write Unit (PFail): 1
00:23:28.285 Atomic Compare & Write Unit: 1
00:23:28.285 Fused Compare & Write: Supported
00:23:28.285 Scatter-Gather List
00:23:28.285 SGL Command Set: Supported
00:23:28.285 SGL Keyed: Supported
00:23:28.285 SGL Bit Bucket Descriptor: Not Supported
00:23:28.285 SGL Metadata Pointer: Not Supported
00:23:28.285 Oversized SGL: Not Supported
00:23:28.285 SGL Metadata Address: Not Supported
00:23:28.285 SGL Offset: Supported
00:23:28.285 Transport SGL Data Block: Not Supported
00:23:28.285 Replay Protected Memory Block: Not Supported
00:23:28.285 
00:23:28.285 Firmware Slot Information
00:23:28.285 =========================
00:23:28.285 Active slot: 0
00:23:28.285 
00:23:28.285 
00:23:28.285 Error Log
00:23:28.285 =========
00:23:28.285 
00:23:28.285 Active Namespaces
00:23:28.285 =================
00:23:28.285 Discovery Log Page
00:23:28.285 ==================
00:23:28.285 Generation Counter: 2
00:23:28.285 Number of Records: 2
00:23:28.285 Record Format: 0
00:23:28.285 
00:23:28.285 Discovery Log Entry 0
00:23:28.285 ----------------------
00:23:28.285 Transport Type: 3 (TCP)
00:23:28.285 Address Family: 1 (IPv4)
00:23:28.285 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:28.285 Entry Flags:
00:23:28.285 Duplicate Returned Information: 1
00:23:28.285 Explicit Persistent Connection Support for Discovery: 1
00:23:28.285 Transport Requirements:
00:23:28.285 Secure Channel: Not Required
00:23:28.285 Port ID: 0 (0x0000)
00:23:28.285 Controller ID: 65535 (0xffff)
00:23:28.285 Admin Max SQ Size: 128
00:23:28.285 Transport Service Identifier: 4420
00:23:28.285 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:28.285 Transport Address: 10.0.0.2
00:23:28.285 Discovery Log Entry 1
00:23:28.285 ----------------------
00:23:28.285 Transport Type: 3 (TCP)
00:23:28.285 Address Family: 1 (IPv4)
00:23:28.285 Subsystem Type: 2 (NVM Subsystem)
00:23:28.285 Entry Flags:
00:23:28.285 Duplicate Returned Information: 0
00:23:28.285 Explicit Persistent Connection Support for Discovery: 0
00:23:28.285 Transport Requirements:
00:23:28.285 Secure Channel: Not Required
00:23:28.285 Port ID: 0 (0x0000)
00:23:28.285 Controller ID: 65535 (0xffff)
00:23:28.285 Admin Max SQ Size: 128
00:23:28.285 Transport Service Identifier: 4420
00:23:28.285 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:23:28.285 Transport Address: 10.0.0.2 [2024-12-10 12:31:50.343270] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:23:28.285 [2024-12-10
12:31:50.343281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d100) on tqpair=0x5db690 00:23:28.285 [2024-12-10 12:31:50.343288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.285 [2024-12-10 12:31:50.343293] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d280) on tqpair=0x5db690 00:23:28.285 [2024-12-10 12:31:50.343297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.285 [2024-12-10 12:31:50.343301] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d400) on tqpair=0x5db690 00:23:28.285 [2024-12-10 12:31:50.343306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.285 [2024-12-10 12:31:50.343310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.285 [2024-12-10 12:31:50.343314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.285 [2024-12-10 12:31:50.343322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.285 [2024-12-10 12:31:50.343325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.285 [2024-12-10 12:31:50.343328] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.285 [2024-12-10 12:31:50.343335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.285 [2024-12-10 12:31:50.343349] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.285 [2024-12-10 12:31:50.343410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.285 [2024-12-10 
12:31:50.343416] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.285 [2024-12-10 12:31:50.343418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.285 [2024-12-10 12:31:50.343422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.285 [2024-12-10 12:31:50.343428] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.285 [2024-12-10 12:31:50.343431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.285 [2024-12-10 12:31:50.343434] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.285 [2024-12-10 12:31:50.343440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.285 [2024-12-10 12:31:50.343454] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.285 [2024-12-10 12:31:50.343583] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.285 [2024-12-10 12:31:50.343588] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.285 [2024-12-10 12:31:50.343591] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.285 [2024-12-10 12:31:50.343594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.285 [2024-12-10 12:31:50.343599] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:23:28.285 [2024-12-10 12:31:50.343603] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:23:28.285 [2024-12-10 12:31:50.343611] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.285 [2024-12-10 12:31:50.343615] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.285 
[2024-12-10 12:31:50.343619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.285 [2024-12-10 12:31:50.343625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.285 [2024-12-10 12:31:50.343635] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.285 [2024-12-10 12:31:50.343718] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.285 [2024-12-10 12:31:50.343724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.285 [2024-12-10 12:31:50.343727] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.285 [2024-12-10 12:31:50.343731] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.285 [2024-12-10 12:31:50.343739] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.285 [2024-12-10 12:31:50.343742] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.285 [2024-12-10 12:31:50.343746] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.285 [2024-12-10 12:31:50.343751] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.285 [2024-12-10 12:31:50.343761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.285 [2024-12-10 12:31:50.343869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.285 [2024-12-10 12:31:50.343875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.285 [2024-12-10 12:31:50.343878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.285 [2024-12-10 12:31:50.343881] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 
00:23:28.285 [2024-12-10 12:31:50.343889] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.285 [2024-12-10 12:31:50.343893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.285 [2024-12-10 12:31:50.343896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.285 [2024-12-10 12:31:50.343901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.285 [2024-12-10 12:31:50.343910] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.285 [2024-12-10 12:31:50.343974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.285 [2024-12-10 12:31:50.343979] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.285 [2024-12-10 12:31:50.343982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.285 [2024-12-10 12:31:50.343985] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.285 [2024-12-10 12:31:50.343993] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.285 [2024-12-10 12:31:50.343997] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.285 [2024-12-10 12:31:50.344000] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.285 [2024-12-10 12:31:50.344006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.285 [2024-12-10 12:31:50.344015] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.285 [2024-12-10 12:31:50.344121] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.285 [2024-12-10 12:31:50.344126] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.285 
[2024-12-10 12:31:50.344129] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.285 [2024-12-10 12:31:50.344132] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.285 [2024-12-10 12:31:50.344141] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.285 [2024-12-10 12:31:50.344144] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.285 [2024-12-10 12:31:50.344147] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.285 [2024-12-10 12:31:50.344154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.286 [2024-12-10 12:31:50.344170] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.286 [2024-12-10 12:31:50.344272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.286 [2024-12-10 12:31:50.344278] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.286 [2024-12-10 12:31:50.344281] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.344284] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.286 [2024-12-10 12:31:50.344292] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.344295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.344299] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.286 [2024-12-10 12:31:50.344304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.286 [2024-12-10 12:31:50.344314] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 
00:23:28.286 [2024-12-10 12:31:50.344423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.286 [2024-12-10 12:31:50.344428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.286 [2024-12-10 12:31:50.344431] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.344434] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.286 [2024-12-10 12:31:50.344442] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.344446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.344449] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.286 [2024-12-10 12:31:50.344455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.286 [2024-12-10 12:31:50.344464] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.286 [2024-12-10 12:31:50.344530] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.286 [2024-12-10 12:31:50.344536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.286 [2024-12-10 12:31:50.344539] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.344542] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.286 [2024-12-10 12:31:50.344550] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.344554] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.344557] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.286 [2024-12-10 12:31:50.344562] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.286 [2024-12-10 12:31:50.344572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.286 [2024-12-10 12:31:50.344674] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.286 [2024-12-10 12:31:50.344680] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.286 [2024-12-10 12:31:50.344683] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.344686] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.286 [2024-12-10 12:31:50.344694] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.344697] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.344700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.286 [2024-12-10 12:31:50.344706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.286 [2024-12-10 12:31:50.344718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.286 [2024-12-10 12:31:50.344826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.286 [2024-12-10 12:31:50.344831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.286 [2024-12-10 12:31:50.344835] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.344838] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.286 [2024-12-10 12:31:50.344846] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.344849] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.344852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.286 [2024-12-10 12:31:50.344858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.286 [2024-12-10 12:31:50.344867] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.286 [2024-12-10 12:31:50.344977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.286 [2024-12-10 12:31:50.344982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.286 [2024-12-10 12:31:50.344985] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.344988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.286 [2024-12-10 12:31:50.344997] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.345000] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.345003] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.286 [2024-12-10 12:31:50.345009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.286 [2024-12-10 12:31:50.345019] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.286 [2024-12-10 12:31:50.345085] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.286 [2024-12-10 12:31:50.345090] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.286 [2024-12-10 12:31:50.345093] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.345096] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.286 [2024-12-10 12:31:50.345104] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.345108] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.345111] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.286 [2024-12-10 12:31:50.345117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.286 [2024-12-10 12:31:50.345125] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.286 [2024-12-10 12:31:50.345229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.286 [2024-12-10 12:31:50.345235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.286 [2024-12-10 12:31:50.345238] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.345241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.286 [2024-12-10 12:31:50.345249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.345252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.345256] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.286 [2024-12-10 12:31:50.345261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.286 [2024-12-10 12:31:50.345273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.286 [2024-12-10 12:31:50.345385] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.286 [2024-12-10 
12:31:50.345390] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.286 [2024-12-10 12:31:50.345393] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.345396] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.286 [2024-12-10 12:31:50.345405] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.345409] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.345412] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.286 [2024-12-10 12:31:50.345417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.286 [2024-12-10 12:31:50.345427] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.286 [2024-12-10 12:31:50.345531] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.286 [2024-12-10 12:31:50.345537] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.286 [2024-12-10 12:31:50.345540] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.345543] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.286 [2024-12-10 12:31:50.345551] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.345555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.345558] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.286 [2024-12-10 12:31:50.345563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.286 [2024-12-10 
12:31:50.345573] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.286 [2024-12-10 12:31:50.345637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.286 [2024-12-10 12:31:50.345642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.286 [2024-12-10 12:31:50.345645] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.345649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.286 [2024-12-10 12:31:50.345657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.345661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.345664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.286 [2024-12-10 12:31:50.345670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.286 [2024-12-10 12:31:50.345679] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.286 [2024-12-10 12:31:50.345791] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.286 [2024-12-10 12:31:50.345796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.286 [2024-12-10 12:31:50.345799] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.345802] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.286 [2024-12-10 12:31:50.345811] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.345815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.286 [2024-12-10 12:31:50.345818] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x5db690) 00:23:28.286 [2024-12-10 12:31:50.345823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.287 [2024-12-10 12:31:50.345833] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.287 [2024-12-10 12:31:50.345934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.287 [2024-12-10 12:31:50.345940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.287 [2024-12-10 12:31:50.345943] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.287 [2024-12-10 12:31:50.345946] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.287 [2024-12-10 12:31:50.345954] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.287 [2024-12-10 12:31:50.345958] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.287 [2024-12-10 12:31:50.345961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.287 [2024-12-10 12:31:50.345967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.287 [2024-12-10 12:31:50.345976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.287 [2024-12-10 12:31:50.346086] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.287 [2024-12-10 12:31:50.346092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.287 [2024-12-10 12:31:50.346094] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.287 [2024-12-10 12:31:50.346098] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.287 [2024-12-10 12:31:50.346106] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:23:28.287 [2024-12-10 12:31:50.346109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.287 [2024-12-10 12:31:50.346112] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.287 [2024-12-10 12:31:50.346118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.287 [2024-12-10 12:31:50.346128] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.287 [2024-12-10 12:31:50.346192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.287 [2024-12-10 12:31:50.346198] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.287 [2024-12-10 12:31:50.346201] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.287 [2024-12-10 12:31:50.346205] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.287 [2024-12-10 12:31:50.346213] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.287 [2024-12-10 12:31:50.346216] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.287 [2024-12-10 12:31:50.346219] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.287 [2024-12-10 12:31:50.346225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.287 [2024-12-10 12:31:50.346234] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.287 [2024-12-10 12:31:50.346339] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.287 [2024-12-10 12:31:50.346344] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.287 [2024-12-10 12:31:50.346347] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:23:28.287 [2024-12-10 12:31:50.346350] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.287 [2024-12-10 12:31:50.346358] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.287 [2024-12-10 12:31:50.346362] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.287 [2024-12-10 12:31:50.346365] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.287 [2024-12-10 12:31:50.346371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.287 [2024-12-10 12:31:50.346380] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.287 [2024-12-10 12:31:50.350166] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.287 [2024-12-10 12:31:50.350174] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.287 [2024-12-10 12:31:50.350180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.287 [2024-12-10 12:31:50.350183] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.287 [2024-12-10 12:31:50.350193] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.287 [2024-12-10 12:31:50.350197] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.287 [2024-12-10 12:31:50.350200] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x5db690) 00:23:28.287 [2024-12-10 12:31:50.350206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.287 [2024-12-10 12:31:50.350218] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x63d580, cid 3, qid 0 00:23:28.287 [2024-12-10 12:31:50.350370] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:23:28.287 [2024-12-10 12:31:50.350375] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.287 [2024-12-10 12:31:50.350378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.287 [2024-12-10 12:31:50.350381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x63d580) on tqpair=0x5db690 00:23:28.287 [2024-12-10 12:31:50.350388] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:23:28.287 00:23:28.287 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:28.287 [2024-12-10 12:31:50.389604] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:23:28.287 [2024-12-10 12:31:50.389655] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1712173 ] 00:23:28.287 [2024-12-10 12:31:50.428810] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:23:28.287 [2024-12-10 12:31:50.428847] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:28.287 [2024-12-10 12:31:50.428852] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:28.287 [2024-12-10 12:31:50.428863] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:28.287 [2024-12-10 12:31:50.428872] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:28.287 [2024-12-10 12:31:50.432336] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:23:28.287 [2024-12-10 12:31:50.432364] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11fe690 0 00:23:28.287 [2024-12-10 12:31:50.439252] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:28.287 [2024-12-10 12:31:50.439266] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:28.287 [2024-12-10 12:31:50.439269] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:28.287 [2024-12-10 12:31:50.439273] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:28.287 [2024-12-10 12:31:50.439297] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.287 [2024-12-10 12:31:50.439302] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.287 [2024-12-10 12:31:50.439306] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11fe690) 00:23:28.287 [2024-12-10 12:31:50.439316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:28.287 [2024-12-10 12:31:50.439333] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260100, cid 0, qid 0 00:23:28.548 [2024-12-10 12:31:50.447168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.548 [2024-12-10 12:31:50.447177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.548 [2024-12-10 12:31:50.447181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.548 [2024-12-10 12:31:50.447184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260100) on tqpair=0x11fe690 00:23:28.548 [2024-12-10 12:31:50.447194] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:28.548 [2024-12-10 12:31:50.447201] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:23:28.548 [2024-12-10 12:31:50.447205] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:23:28.548 [2024-12-10 12:31:50.447215] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.548 [2024-12-10 12:31:50.447219] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.548 [2024-12-10 12:31:50.447222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11fe690) 00:23:28.548 [2024-12-10 12:31:50.447229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.548 [2024-12-10 12:31:50.447241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260100, cid 0, qid 0 00:23:28.548 [2024-12-10 12:31:50.447420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.548 [2024-12-10 12:31:50.447426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.548 [2024-12-10 12:31:50.447429] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.548 [2024-12-10 12:31:50.447433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260100) on tqpair=0x11fe690 00:23:28.548 [2024-12-10 12:31:50.447437] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:23:28.548 [2024-12-10 12:31:50.447443] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:23:28.548 [2024-12-10 12:31:50.447449] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.548 [2024-12-10 12:31:50.447453] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.548 [2024-12-10 12:31:50.447456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x11fe690) 00:23:28.548 [2024-12-10 12:31:50.447462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.548 [2024-12-10 12:31:50.447472] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260100, cid 0, qid 0 00:23:28.548 [2024-12-10 12:31:50.447534] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.548 [2024-12-10 12:31:50.447540] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.548 [2024-12-10 12:31:50.447543] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.548 [2024-12-10 12:31:50.447546] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260100) on tqpair=0x11fe690 00:23:28.548 [2024-12-10 12:31:50.447550] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:23:28.548 [2024-12-10 12:31:50.447557] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:28.548 [2024-12-10 12:31:50.447563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.548 [2024-12-10 12:31:50.447566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.548 [2024-12-10 12:31:50.447569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11fe690) 00:23:28.548 [2024-12-10 12:31:50.447575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.548 [2024-12-10 12:31:50.447585] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260100, cid 0, qid 0 00:23:28.548 [2024-12-10 12:31:50.447652] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.548 [2024-12-10 12:31:50.447658] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:23:28.548 [2024-12-10 12:31:50.447661] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.548 [2024-12-10 12:31:50.447664] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260100) on tqpair=0x11fe690 00:23:28.548 [2024-12-10 12:31:50.447669] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:28.548 [2024-12-10 12:31:50.447677] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.548 [2024-12-10 12:31:50.447680] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.548 [2024-12-10 12:31:50.447683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11fe690) 00:23:28.548 [2024-12-10 12:31:50.447689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.548 [2024-12-10 12:31:50.447699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260100, cid 0, qid 0 00:23:28.548 [2024-12-10 12:31:50.447759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.548 [2024-12-10 12:31:50.447765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.548 [2024-12-10 12:31:50.447768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.548 [2024-12-10 12:31:50.447771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260100) on tqpair=0x11fe690 00:23:28.548 [2024-12-10 12:31:50.447774] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:28.548 [2024-12-10 12:31:50.447779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:28.548 [2024-12-10 12:31:50.447787] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:28.548 [2024-12-10 12:31:50.447894] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:23:28.548 [2024-12-10 12:31:50.447898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:28.548 [2024-12-10 12:31:50.447905] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.548 [2024-12-10 12:31:50.447908] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.548 [2024-12-10 12:31:50.447911] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11fe690) 00:23:28.548 [2024-12-10 12:31:50.447917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.548 [2024-12-10 12:31:50.447928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260100, cid 0, qid 0 00:23:28.548 [2024-12-10 12:31:50.447988] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.548 [2024-12-10 12:31:50.447993] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.548 [2024-12-10 12:31:50.447996] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.548 [2024-12-10 12:31:50.447999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260100) on tqpair=0x11fe690 00:23:28.548 [2024-12-10 12:31:50.448003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:28.548 [2024-12-10 12:31:50.448011] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.548 [2024-12-10 12:31:50.448015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:23:28.548 [2024-12-10 12:31:50.448018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11fe690) 00:23:28.548 [2024-12-10 12:31:50.448024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.548 [2024-12-10 12:31:50.448036] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260100, cid 0, qid 0 00:23:28.548 [2024-12-10 12:31:50.448106] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.548 [2024-12-10 12:31:50.448111] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.548 [2024-12-10 12:31:50.448114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.548 [2024-12-10 12:31:50.448117] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260100) on tqpair=0x11fe690 00:23:28.548 [2024-12-10 12:31:50.448121] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:28.548 [2024-12-10 12:31:50.448126] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:28.548 [2024-12-10 12:31:50.448132] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:23:28.548 [2024-12-10 12:31:50.448139] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:28.548 [2024-12-10 12:31:50.448149] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.548 [2024-12-10 12:31:50.448153] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11fe690) 00:23:28.548 [2024-12-10 12:31:50.448164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.548 [2024-12-10 12:31:50.448175] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260100, cid 0, qid 0 00:23:28.548 [2024-12-10 12:31:50.448272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.548 [2024-12-10 12:31:50.448278] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.548 [2024-12-10 12:31:50.448281] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.448284] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11fe690): datao=0, datal=4096, cccid=0 00:23:28.549 [2024-12-10 12:31:50.448288] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1260100) on tqpair(0x11fe690): expected_datao=0, payload_size=4096 00:23:28.549 [2024-12-10 12:31:50.448292] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.448305] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.448309] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.489276] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.549 [2024-12-10 12:31:50.489290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.549 [2024-12-10 12:31:50.489294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.489298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260100) on tqpair=0x11fe690 00:23:28.549 [2024-12-10 12:31:50.489306] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:23:28.549 [2024-12-10 12:31:50.489310] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:23:28.549 
[2024-12-10 12:31:50.489314] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:23:28.549 [2024-12-10 12:31:50.489318] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:23:28.549 [2024-12-10 12:31:50.489322] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:23:28.549 [2024-12-10 12:31:50.489327] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:23:28.549 [2024-12-10 12:31:50.489335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:28.549 [2024-12-10 12:31:50.489342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.489349] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.489352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11fe690) 00:23:28.549 [2024-12-10 12:31:50.489360] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:28.549 [2024-12-10 12:31:50.489372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260100, cid 0, qid 0 00:23:28.549 [2024-12-10 12:31:50.489442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.549 [2024-12-10 12:31:50.489448] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.549 [2024-12-10 12:31:50.489451] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.489454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260100) on tqpair=0x11fe690 00:23:28.549 [2024-12-10 12:31:50.489460] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.489463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.489466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11fe690) 00:23:28.549 [2024-12-10 12:31:50.489472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.549 [2024-12-10 12:31:50.489477] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.489480] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.489483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11fe690) 00:23:28.549 [2024-12-10 12:31:50.489488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.549 [2024-12-10 12:31:50.489493] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.489497] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.489500] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11fe690) 00:23:28.549 [2024-12-10 12:31:50.489505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.549 [2024-12-10 12:31:50.489510] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.489513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.489516] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11fe690) 00:23:28.549 [2024-12-10 12:31:50.489521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.549 [2024-12-10 12:31:50.489525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:28.549 [2024-12-10 12:31:50.489536] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:28.549 [2024-12-10 12:31:50.489542] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.489546] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11fe690) 00:23:28.549 [2024-12-10 12:31:50.489551] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.549 [2024-12-10 12:31:50.489563] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260100, cid 0, qid 0 00:23:28.549 [2024-12-10 12:31:50.489568] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260280, cid 1, qid 0 00:23:28.549 [2024-12-10 12:31:50.489572] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260400, cid 2, qid 0 00:23:28.549 [2024-12-10 12:31:50.489576] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260580, cid 3, qid 0 00:23:28.549 [2024-12-10 12:31:50.489582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260700, cid 4, qid 0 00:23:28.549 [2024-12-10 12:31:50.489676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.549 [2024-12-10 12:31:50.489682] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.549 [2024-12-10 12:31:50.489685] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.489689] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260700) on tqpair=0x11fe690 00:23:28.549 
[2024-12-10 12:31:50.489693] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:28.549 [2024-12-10 12:31:50.489697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:28.549 [2024-12-10 12:31:50.489706] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:28.549 [2024-12-10 12:31:50.489713] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:28.549 [2024-12-10 12:31:50.489718] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.489721] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.489724] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11fe690) 00:23:28.549 [2024-12-10 12:31:50.489730] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:28.549 [2024-12-10 12:31:50.489740] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260700, cid 4, qid 0 00:23:28.549 [2024-12-10 12:31:50.489803] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.549 [2024-12-10 12:31:50.489808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.549 [2024-12-10 12:31:50.489811] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.489814] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260700) on tqpair=0x11fe690 00:23:28.549 [2024-12-10 12:31:50.489865] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to 
identify active ns (timeout 30000 ms) 00:23:28.549 [2024-12-10 12:31:50.489875] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:28.549 [2024-12-10 12:31:50.489882] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.489885] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11fe690) 00:23:28.549 [2024-12-10 12:31:50.489891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.549 [2024-12-10 12:31:50.489900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260700, cid 4, qid 0 00:23:28.549 [2024-12-10 12:31:50.489979] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.549 [2024-12-10 12:31:50.489985] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.549 [2024-12-10 12:31:50.489988] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.489991] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11fe690): datao=0, datal=4096, cccid=4 00:23:28.549 [2024-12-10 12:31:50.489996] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1260700) on tqpair(0x11fe690): expected_datao=0, payload_size=4096 00:23:28.549 [2024-12-10 12:31:50.489999] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.490005] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.490009] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.490018] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.549 [2024-12-10 12:31:50.490024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:28.549 [2024-12-10 12:31:50.490029] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.490032] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260700) on tqpair=0x11fe690 00:23:28.549 [2024-12-10 12:31:50.490043] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:28.549 [2024-12-10 12:31:50.490054] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:28.549 [2024-12-10 12:31:50.490062] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:28.549 [2024-12-10 12:31:50.490068] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.490071] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11fe690) 00:23:28.549 [2024-12-10 12:31:50.490077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.549 [2024-12-10 12:31:50.490087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260700, cid 4, qid 0 00:23:28.549 [2024-12-10 12:31:50.490175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.549 [2024-12-10 12:31:50.490181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.549 [2024-12-10 12:31:50.490184] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.549 [2024-12-10 12:31:50.490187] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11fe690): datao=0, datal=4096, cccid=4 00:23:28.550 [2024-12-10 12:31:50.490191] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1260700) on tqpair(0x11fe690): expected_datao=0, payload_size=4096 00:23:28.550 
[2024-12-10 12:31:50.490195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490201] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490204] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.550 [2024-12-10 12:31:50.490235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.550 [2024-12-10 12:31:50.490238] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260700) on tqpair=0x11fe690 00:23:28.550 [2024-12-10 12:31:50.490250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:28.550 [2024-12-10 12:31:50.490259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:28.550 [2024-12-10 12:31:50.490266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490269] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11fe690) 00:23:28.550 [2024-12-10 12:31:50.490275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.550 [2024-12-10 12:31:50.490286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260700, cid 4, qid 0 00:23:28.550 [2024-12-10 12:31:50.490361] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.550 [2024-12-10 12:31:50.490367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.550 [2024-12-10 12:31:50.490370] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490373] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11fe690): datao=0, datal=4096, cccid=4 00:23:28.550 [2024-12-10 12:31:50.490377] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1260700) on tqpair(0x11fe690): expected_datao=0, payload_size=4096 00:23:28.550 [2024-12-10 12:31:50.490381] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490388] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490392] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490405] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.550 [2024-12-10 12:31:50.490411] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.550 [2024-12-10 12:31:50.490414] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490417] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260700) on tqpair=0x11fe690 00:23:28.550 [2024-12-10 12:31:50.490426] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:28.550 [2024-12-10 12:31:50.490433] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:23:28.550 [2024-12-10 12:31:50.490440] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:23:28.550 [2024-12-10 12:31:50.490445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:28.550 [2024-12-10 12:31:50.490450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:28.550 [2024-12-10 12:31:50.490454] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:23:28.550 [2024-12-10 12:31:50.490459] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:28.550 [2024-12-10 12:31:50.490463] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:28.550 [2024-12-10 12:31:50.490468] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:28.550 [2024-12-10 12:31:50.490480] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11fe690) 00:23:28.550 [2024-12-10 12:31:50.490489] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.550 [2024-12-10 12:31:50.490495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490498] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490501] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11fe690) 00:23:28.550 [2024-12-10 12:31:50.490506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.550 [2024-12-10 12:31:50.490519] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260700, cid 4, qid 0 00:23:28.550 [2024-12-10 12:31:50.490523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260880, cid 5, 
qid 0 00:23:28.550 [2024-12-10 12:31:50.490614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.550 [2024-12-10 12:31:50.490619] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.550 [2024-12-10 12:31:50.490622] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490625] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260700) on tqpair=0x11fe690 00:23:28.550 [2024-12-10 12:31:50.490631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.550 [2024-12-10 12:31:50.490636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.550 [2024-12-10 12:31:50.490639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260880) on tqpair=0x11fe690 00:23:28.550 [2024-12-10 12:31:50.490651] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490656] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11fe690) 00:23:28.550 [2024-12-10 12:31:50.490661] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.550 [2024-12-10 12:31:50.490671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260880, cid 5, qid 0 00:23:28.550 [2024-12-10 12:31:50.490736] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.550 [2024-12-10 12:31:50.490742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.550 [2024-12-10 12:31:50.490745] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260880) on tqpair=0x11fe690 00:23:28.550 [2024-12-10 12:31:50.490756] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11fe690) 00:23:28.550 [2024-12-10 12:31:50.490765] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.550 [2024-12-10 12:31:50.490775] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260880, cid 5, qid 0 00:23:28.550 [2024-12-10 12:31:50.490841] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.550 [2024-12-10 12:31:50.490847] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.550 [2024-12-10 12:31:50.490850] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490853] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260880) on tqpair=0x11fe690 00:23:28.550 [2024-12-10 12:31:50.490862] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490866] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11fe690) 00:23:28.550 [2024-12-10 12:31:50.490871] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.550 [2024-12-10 12:31:50.490880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260880, cid 5, qid 0 00:23:28.550 [2024-12-10 12:31:50.490941] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.550 [2024-12-10 12:31:50.490946] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.550 [2024-12-10 12:31:50.490949] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490952] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1260880) on tqpair=0x11fe690 00:23:28.550 [2024-12-10 12:31:50.490965] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490969] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11fe690) 00:23:28.550 [2024-12-10 12:31:50.490974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.550 [2024-12-10 12:31:50.490980] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490983] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11fe690) 00:23:28.550 [2024-12-10 12:31:50.490989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.550 [2024-12-10 12:31:50.490995] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.490998] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x11fe690) 00:23:28.550 [2024-12-10 12:31:50.491003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.550 [2024-12-10 12:31:50.491010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.491015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11fe690) 00:23:28.550 [2024-12-10 12:31:50.491020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.550 [2024-12-10 12:31:50.491032] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260880, 
cid 5, qid 0 00:23:28.550 [2024-12-10 12:31:50.491037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260700, cid 4, qid 0 00:23:28.550 [2024-12-10 12:31:50.491040] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260a00, cid 6, qid 0 00:23:28.550 [2024-12-10 12:31:50.491045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260b80, cid 7, qid 0 00:23:28.550 [2024-12-10 12:31:50.495169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.550 [2024-12-10 12:31:50.495177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.550 [2024-12-10 12:31:50.495180] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.495183] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11fe690): datao=0, datal=8192, cccid=5 00:23:28.550 [2024-12-10 12:31:50.495187] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1260880) on tqpair(0x11fe690): expected_datao=0, payload_size=8192 00:23:28.550 [2024-12-10 12:31:50.495191] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.495197] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.495200] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.550 [2024-12-10 12:31:50.495205] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.550 [2024-12-10 12:31:50.495210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.551 [2024-12-10 12:31:50.495213] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.551 [2024-12-10 12:31:50.495216] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11fe690): datao=0, datal=512, cccid=4 00:23:28.551 [2024-12-10 12:31:50.495220] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1260700) on tqpair(0x11fe690): 
expected_datao=0, payload_size=512 00:23:28.551 [2024-12-10 12:31:50.495223] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.551 [2024-12-10 12:31:50.495229] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.551 [2024-12-10 12:31:50.495232] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.551 [2024-12-10 12:31:50.495237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.551 [2024-12-10 12:31:50.495241] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.551 [2024-12-10 12:31:50.495244] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.551 [2024-12-10 12:31:50.495247] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11fe690): datao=0, datal=512, cccid=6 00:23:28.551 [2024-12-10 12:31:50.495251] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1260a00) on tqpair(0x11fe690): expected_datao=0, payload_size=512 00:23:28.551 [2024-12-10 12:31:50.495255] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.551 [2024-12-10 12:31:50.495260] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.551 [2024-12-10 12:31:50.495263] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.551 [2024-12-10 12:31:50.495268] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:28.551 [2024-12-10 12:31:50.495273] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:28.551 [2024-12-10 12:31:50.495276] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:28.551 [2024-12-10 12:31:50.495279] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11fe690): datao=0, datal=4096, cccid=7 00:23:28.551 [2024-12-10 12:31:50.495282] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1260b80) on tqpair(0x11fe690): expected_datao=0, payload_size=4096 00:23:28.551 [2024-12-10 
12:31:50.495286] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.551 [2024-12-10 12:31:50.495291] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:28.551 [2024-12-10 12:31:50.495299] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:28.551 [2024-12-10 12:31:50.495304] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.551 [2024-12-10 12:31:50.495309] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.551 [2024-12-10 12:31:50.495312] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.551 [2024-12-10 12:31:50.495315] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260880) on tqpair=0x11fe690 00:23:28.551 [2024-12-10 12:31:50.495326] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.551 [2024-12-10 12:31:50.495331] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.551 [2024-12-10 12:31:50.495334] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.551 [2024-12-10 12:31:50.495337] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260700) on tqpair=0x11fe690 00:23:28.551 [2024-12-10 12:31:50.495345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.551 [2024-12-10 12:31:50.495350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.551 [2024-12-10 12:31:50.495353] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.551 [2024-12-10 12:31:50.495356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260a00) on tqpair=0x11fe690 00:23:28.551 [2024-12-10 12:31:50.495362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.551 [2024-12-10 12:31:50.495367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.551 [2024-12-10 12:31:50.495370] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:23:28.551 [2024-12-10 12:31:50.495373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260b80) on tqpair=0x11fe690
00:23:28.551 =====================================================
00:23:28.551 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:28.551 =====================================================
00:23:28.551 Controller Capabilities/Features
00:23:28.551 ================================
00:23:28.551 Vendor ID: 8086
00:23:28.551 Subsystem Vendor ID: 8086
00:23:28.551 Serial Number: SPDK00000000000001
00:23:28.551 Model Number: SPDK bdev Controller
00:23:28.551 Firmware Version: 25.01
00:23:28.551 Recommended Arb Burst: 6
00:23:28.551 IEEE OUI Identifier: e4 d2 5c
00:23:28.551 Multi-path I/O
00:23:28.551 May have multiple subsystem ports: Yes
00:23:28.551 May have multiple controllers: Yes
00:23:28.551 Associated with SR-IOV VF: No
00:23:28.551 Max Data Transfer Size: 131072
00:23:28.551 Max Number of Namespaces: 32
00:23:28.551 Max Number of I/O Queues: 127
00:23:28.551 NVMe Specification Version (VS): 1.3
00:23:28.551 NVMe Specification Version (Identify): 1.3
00:23:28.551 Maximum Queue Entries: 128
00:23:28.551 Contiguous Queues Required: Yes
00:23:28.551 Arbitration Mechanisms Supported
00:23:28.551 Weighted Round Robin: Not Supported
00:23:28.551 Vendor Specific: Not Supported
00:23:28.551 Reset Timeout: 15000 ms
00:23:28.551 Doorbell Stride: 4 bytes
00:23:28.551 NVM Subsystem Reset: Not Supported
00:23:28.551 Command Sets Supported
00:23:28.551 NVM Command Set: Supported
00:23:28.551 Boot Partition: Not Supported
00:23:28.551 Memory Page Size Minimum: 4096 bytes
00:23:28.551 Memory Page Size Maximum: 4096 bytes
00:23:28.551 Persistent Memory Region: Not Supported
00:23:28.551 Optional Asynchronous Events Supported
00:23:28.551 Namespace Attribute Notices: Supported
00:23:28.551 Firmware Activation Notices: Not Supported
00:23:28.551 ANA Change Notices: Not Supported
00:23:28.551 PLE Aggregate Log Change Notices: Not Supported
00:23:28.551 LBA Status Info Alert Notices: Not Supported
00:23:28.551 EGE Aggregate Log Change Notices: Not Supported
00:23:28.551 Normal NVM Subsystem Shutdown event: Not Supported
00:23:28.551 Zone Descriptor Change Notices: Not Supported
00:23:28.551 Discovery Log Change Notices: Not Supported
00:23:28.551 Controller Attributes
00:23:28.551 128-bit Host Identifier: Supported
00:23:28.551 Non-Operational Permissive Mode: Not Supported
00:23:28.551 NVM Sets: Not Supported
00:23:28.551 Read Recovery Levels: Not Supported
00:23:28.551 Endurance Groups: Not Supported
00:23:28.551 Predictable Latency Mode: Not Supported
00:23:28.551 Traffic Based Keep ALive: Not Supported
00:23:28.551 Namespace Granularity: Not Supported
00:23:28.551 SQ Associations: Not Supported
00:23:28.551 UUID List: Not Supported
00:23:28.551 Multi-Domain Subsystem: Not Supported
00:23:28.551 Fixed Capacity Management: Not Supported
00:23:28.551 Variable Capacity Management: Not Supported
00:23:28.551 Delete Endurance Group: Not Supported
00:23:28.551 Delete NVM Set: Not Supported
00:23:28.551 Extended LBA Formats Supported: Not Supported
00:23:28.551 Flexible Data Placement Supported: Not Supported
00:23:28.551
00:23:28.551 Controller Memory Buffer Support
00:23:28.551 ================================
00:23:28.551 Supported: No
00:23:28.551
00:23:28.551 Persistent Memory Region Support
00:23:28.551 ================================
00:23:28.551 Supported: No
00:23:28.551
00:23:28.551 Admin Command Set Attributes
00:23:28.551 ============================
00:23:28.551 Security Send/Receive: Not Supported
00:23:28.551 Format NVM: Not Supported
00:23:28.551 Firmware Activate/Download: Not Supported
00:23:28.551 Namespace Management: Not Supported
00:23:28.551 Device Self-Test: Not Supported
00:23:28.551 Directives: Not Supported
00:23:28.551 NVMe-MI: Not Supported
00:23:28.551 Virtualization Management: Not Supported
00:23:28.551 Doorbell Buffer Config: Not Supported
00:23:28.551 Get LBA Status Capability: Not Supported
00:23:28.551 Command & Feature Lockdown Capability: Not Supported
00:23:28.551 Abort Command Limit: 4
00:23:28.551 Async Event Request Limit: 4
00:23:28.551 Number of Firmware Slots: N/A
00:23:28.551 Firmware Slot 1 Read-Only: N/A
00:23:28.551 Firmware Activation Without Reset: N/A
00:23:28.551 Multiple Update Detection Support: N/A
00:23:28.551 Firmware Update Granularity: No Information Provided
00:23:28.551 Per-Namespace SMART Log: No
00:23:28.551 Asymmetric Namespace Access Log Page: Not Supported
00:23:28.551 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:23:28.551 Command Effects Log Page: Supported
00:23:28.551 Get Log Page Extended Data: Supported
00:23:28.551 Telemetry Log Pages: Not Supported
00:23:28.551 Persistent Event Log Pages: Not Supported
00:23:28.551 Supported Log Pages Log Page: May Support
00:23:28.551 Commands Supported & Effects Log Page: Not Supported
00:23:28.551 Feature Identifiers & Effects Log Page:May Support
00:23:28.551 NVMe-MI Commands & Effects Log Page: May Support
00:23:28.551 Data Area 4 for Telemetry Log: Not Supported
00:23:28.551 Error Log Page Entries Supported: 128
00:23:28.551 Keep Alive: Supported
00:23:28.551 Keep Alive Granularity: 10000 ms
00:23:28.551
00:23:28.551 NVM Command Set Attributes
00:23:28.551 ==========================
00:23:28.551 Submission Queue Entry Size
00:23:28.551 Max: 64
00:23:28.551 Min: 64
00:23:28.551 Completion Queue Entry Size
00:23:28.551 Max: 16
00:23:28.551 Min: 16
00:23:28.551 Number of Namespaces: 32
00:23:28.551 Compare Command: Supported
00:23:28.551 Write Uncorrectable Command: Not Supported
00:23:28.551 Dataset Management Command: Supported
00:23:28.551 Write Zeroes Command: Supported
00:23:28.551 Set Features Save Field: Not Supported
00:23:28.551 Reservations: Supported
00:23:28.551 Timestamp: Not Supported
00:23:28.551 Copy: Supported
00:23:28.551 Volatile Write Cache: Present
00:23:28.551 Atomic Write Unit (Normal): 1
00:23:28.551 Atomic Write Unit (PFail): 1
00:23:28.552 Atomic Compare & Write Unit: 1
00:23:28.552 Fused Compare & Write: Supported
00:23:28.552 Scatter-Gather List
00:23:28.552 SGL Command Set: Supported
00:23:28.552 SGL Keyed: Supported
00:23:28.552 SGL Bit Bucket Descriptor: Not Supported
00:23:28.552 SGL Metadata Pointer: Not Supported
00:23:28.552 Oversized SGL: Not Supported
00:23:28.552 SGL Metadata Address: Not Supported
00:23:28.552 SGL Offset: Supported
00:23:28.552 Transport SGL Data Block: Not Supported
00:23:28.552 Replay Protected Memory Block: Not Supported
00:23:28.552
00:23:28.552 Firmware Slot Information
00:23:28.552 =========================
00:23:28.552 Active slot: 1
00:23:28.552 Slot 1 Firmware Revision: 25.01
00:23:28.552
00:23:28.552
00:23:28.552 Commands Supported and Effects
00:23:28.552 ==============================
00:23:28.552 Admin Commands
00:23:28.552 --------------
00:23:28.552 Get Log Page (02h): Supported
00:23:28.552 Identify (06h): Supported
00:23:28.552 Abort (08h): Supported
00:23:28.552 Set Features (09h): Supported
00:23:28.552 Get Features (0Ah): Supported
00:23:28.552 Asynchronous Event Request (0Ch): Supported
00:23:28.552 Keep Alive (18h): Supported
00:23:28.552 I/O Commands
00:23:28.552 ------------
00:23:28.552 Flush (00h): Supported LBA-Change
00:23:28.552 Write (01h): Supported LBA-Change
00:23:28.552 Read (02h): Supported
00:23:28.552 Compare (05h): Supported
00:23:28.552 Write Zeroes (08h): Supported LBA-Change
00:23:28.552 Dataset Management (09h): Supported LBA-Change
00:23:28.552 Copy (19h): Supported LBA-Change
00:23:28.552
00:23:28.552 Error Log
00:23:28.552 =========
00:23:28.552
00:23:28.552 Arbitration
00:23:28.552 ===========
00:23:28.552 Arbitration Burst: 1
00:23:28.552
00:23:28.552 Power Management
00:23:28.552 ================
00:23:28.552 Number of Power States: 1
00:23:28.552 Current Power State: Power State #0
00:23:28.552 Power State #0:
00:23:28.552 Max Power: 0.00 W
00:23:28.552 Non-Operational State: Operational
00:23:28.552 Entry Latency: Not Reported
00:23:28.552 Exit Latency: Not Reported
00:23:28.552 Relative Read Throughput: 0
00:23:28.552 Relative Read Latency: 0
00:23:28.552 Relative Write Throughput: 0
00:23:28.552 Relative Write Latency: 0
00:23:28.552 Idle Power: Not Reported
00:23:28.552 Active Power: Not Reported
00:23:28.552 Non-Operational Permissive Mode: Not Supported
00:23:28.552
00:23:28.552 Health Information
00:23:28.552 ==================
00:23:28.552 Critical Warnings:
00:23:28.552 Available Spare Space: OK
00:23:28.552 Temperature: OK
00:23:28.552 Device Reliability: OK
00:23:28.552 Read Only: No
00:23:28.552 Volatile Memory Backup: OK
00:23:28.552 Current Temperature: 0 Kelvin (-273 Celsius)
00:23:28.552 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:23:28.552 Available Spare: 0%
00:23:28.552 Available Spare Threshold: 0%
00:23:28.552 Life Percentage Used:[2024-12-10 12:31:50.495455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:28.552 [2024-12-10 12:31:50.495460] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11fe690)
00:23:28.552 [2024-12-10 12:31:50.495466] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:28.552 [2024-12-10 12:31:50.495479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260b80, cid 7, qid 0
00:23:28.552 [2024-12-10 12:31:50.495638] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:28.552 [2024-12-10 12:31:50.495644] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:28.552 [2024-12-10 12:31:50.495647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:28.552 [2024-12-10 12:31:50.495650] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260b80) on tqpair=0x11fe690
00:23:28.552 [2024-12-10 12:31:50.495678]
nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:23:28.552 [2024-12-10 12:31:50.495688] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260100) on tqpair=0x11fe690 00:23:28.552 [2024-12-10 12:31:50.495694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.552 [2024-12-10 12:31:50.495699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260280) on tqpair=0x11fe690 00:23:28.552 [2024-12-10 12:31:50.495703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.552 [2024-12-10 12:31:50.495707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260400) on tqpair=0x11fe690 00:23:28.552 [2024-12-10 12:31:50.495711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.552 [2024-12-10 12:31:50.495715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260580) on tqpair=0x11fe690 00:23:28.552 [2024-12-10 12:31:50.495719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.552 [2024-12-10 12:31:50.495726] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.552 [2024-12-10 12:31:50.495729] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.552 [2024-12-10 12:31:50.495733] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11fe690) 00:23:28.552 [2024-12-10 12:31:50.495739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.552 [2024-12-10 12:31:50.495752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260580, cid 3, qid 
0 00:23:28.552 [2024-12-10 12:31:50.495814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.552 [2024-12-10 12:31:50.495820] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.552 [2024-12-10 12:31:50.495823] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.552 [2024-12-10 12:31:50.495826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260580) on tqpair=0x11fe690 00:23:28.552 [2024-12-10 12:31:50.495832] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.552 [2024-12-10 12:31:50.495835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.552 [2024-12-10 12:31:50.495838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11fe690) 00:23:28.552 [2024-12-10 12:31:50.495844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.552 [2024-12-10 12:31:50.495856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260580, cid 3, qid 0 00:23:28.552 [2024-12-10 12:31:50.495930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.552 [2024-12-10 12:31:50.495935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.552 [2024-12-10 12:31:50.495938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.552 [2024-12-10 12:31:50.495942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260580) on tqpair=0x11fe690 00:23:28.552 [2024-12-10 12:31:50.495946] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:28.552 [2024-12-10 12:31:50.495950] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:28.552 [2024-12-10 12:31:50.495958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:23:28.552 [2024-12-10 12:31:50.495961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.552 [2024-12-10 12:31:50.495964] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11fe690) 00:23:28.552 [2024-12-10 12:31:50.495970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.552 [2024-12-10 12:31:50.495979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260580, cid 3, qid 0 00:23:28.552 [2024-12-10 12:31:50.496044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.552 [2024-12-10 12:31:50.496049] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.552 [2024-12-10 12:31:50.496052] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.552 [2024-12-10 12:31:50.496056] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260580) on tqpair=0x11fe690 00:23:28.552 [2024-12-10 12:31:50.496064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.552 [2024-12-10 12:31:50.496067] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.552 [2024-12-10 12:31:50.496070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11fe690) 00:23:28.552 [2024-12-10 12:31:50.496076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.552 [2024-12-10 12:31:50.496085] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260580, cid 3, qid 0 00:23:28.552 [2024-12-10 12:31:50.496145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.552 [2024-12-10 12:31:50.496151] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.553 [2024-12-10 12:31:50.496154] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:23:28.553 [2024-12-10 12:31:50.496172] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260580) on tqpair=0x11fe690 00:23:28.553 [2024-12-10 12:31:50.496184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.553 [2024-12-10 12:31:50.496188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.553 [2024-12-10 12:31:50.496191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11fe690) 00:23:28.553 [2024-12-10 12:31:50.496197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.553 [2024-12-10 12:31:50.496207] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260580, cid 3, qid 0 00:23:28.553 [2024-12-10 12:31:50.496273] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.553 [2024-12-10 12:31:50.496279] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.553 [2024-12-10 12:31:50.496282] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.553 [2024-12-10 12:31:50.496285] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260580) on tqpair=0x11fe690 00:23:28.553 [2024-12-10 12:31:50.496293] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.553 [2024-12-10 12:31:50.496297] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.553 [2024-12-10 12:31:50.496300] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11fe690) 00:23:28.553 [2024-12-10 12:31:50.496305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.553 [2024-12-10 12:31:50.496315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260580, cid 3, qid 0 00:23:28.553 [2024-12-10 12:31:50.496392] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:23:28.553 [2024-12-10 12:31:50.496397] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.553 [2024-12-10 12:31:50.496400] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.553 [2024-12-10 12:31:50.496403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260580) on tqpair=0x11fe690 00:23:28.553 [2024-12-10 12:31:50.496411] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.553 [2024-12-10 12:31:50.496415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.553 [2024-12-10 12:31:50.496418] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11fe690) 00:23:28.553 [2024-12-10 12:31:50.496423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.553 [2024-12-10 12:31:50.496433] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260580, cid 3, qid 0 00:23:28.553 [2024-12-10 12:31:50.496495] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.553 [2024-12-10 12:31:50.496501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.553 [2024-12-10 12:31:50.496504] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.553 [2024-12-10 12:31:50.496507] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260580) on tqpair=0x11fe690 00:23:28.553 [2024-12-10 12:31:50.496516] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.553 [2024-12-10 12:31:50.496520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.553 [2024-12-10 12:31:50.496523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11fe690) 00:23:28.553 [2024-12-10 12:31:50.496528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:28.553 [2024-12-10 12:31:50.496538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260580, cid 3, qid 0 00:23:28.553 [2024-12-10 12:31:50.500165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.553 [2024-12-10 12:31:50.500173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.553 [2024-12-10 12:31:50.500176] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.553 [2024-12-10 12:31:50.500180] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260580) on tqpair=0x11fe690 00:23:28.553 [2024-12-10 12:31:50.500189] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:28.553 [2024-12-10 12:31:50.500194] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:28.553 [2024-12-10 12:31:50.500198] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11fe690) 00:23:28.553 [2024-12-10 12:31:50.500203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.553 [2024-12-10 12:31:50.500215] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1260580, cid 3, qid 0 00:23:28.553 [2024-12-10 12:31:50.500365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:28.553 [2024-12-10 12:31:50.500371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:28.553 [2024-12-10 12:31:50.500374] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:28.553 [2024-12-10 12:31:50.500377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1260580) on tqpair=0x11fe690 00:23:28.553 [2024-12-10 12:31:50.500384] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:23:28.553 0% 00:23:28.553 Data Units Read: 0 00:23:28.553 Data Units Written: 0 00:23:28.553 Host Read 
Commands: 0 00:23:28.553 Host Write Commands: 0 00:23:28.553 Controller Busy Time: 0 minutes 00:23:28.553 Power Cycles: 0 00:23:28.553 Power On Hours: 0 hours 00:23:28.553 Unsafe Shutdowns: 0 00:23:28.553 Unrecoverable Media Errors: 0 00:23:28.553 Lifetime Error Log Entries: 0 00:23:28.553 Warning Temperature Time: 0 minutes 00:23:28.553 Critical Temperature Time: 0 minutes 00:23:28.553 00:23:28.553 Number of Queues 00:23:28.553 ================ 00:23:28.553 Number of I/O Submission Queues: 127 00:23:28.553 Number of I/O Completion Queues: 127 00:23:28.553 00:23:28.553 Active Namespaces 00:23:28.553 ================= 00:23:28.553 Namespace ID:1 00:23:28.553 Error Recovery Timeout: Unlimited 00:23:28.553 Command Set Identifier: NVM (00h) 00:23:28.553 Deallocate: Supported 00:23:28.553 Deallocated/Unwritten Error: Not Supported 00:23:28.553 Deallocated Read Value: Unknown 00:23:28.553 Deallocate in Write Zeroes: Not Supported 00:23:28.553 Deallocated Guard Field: 0xFFFF 00:23:28.553 Flush: Supported 00:23:28.553 Reservation: Supported 00:23:28.553 Namespace Sharing Capabilities: Multiple Controllers 00:23:28.553 Size (in LBAs): 131072 (0GiB) 00:23:28.553 Capacity (in LBAs): 131072 (0GiB) 00:23:28.553 Utilization (in LBAs): 131072 (0GiB) 00:23:28.553 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:28.553 EUI64: ABCDEF0123456789 00:23:28.553 UUID: 7f179e15-9004-4200-b74d-018dc7bbc25a 00:23:28.553 Thin Provisioning: Not Supported 00:23:28.553 Per-NS Atomic Units: Yes 00:23:28.553 Atomic Boundary Size (Normal): 0 00:23:28.553 Atomic Boundary Size (PFail): 0 00:23:28.553 Atomic Boundary Offset: 0 00:23:28.553 Maximum Single Source Range Length: 65535 00:23:28.553 Maximum Copy Length: 65535 00:23:28.553 Maximum Source Range Count: 1 00:23:28.553 NGUID/EUI64 Never Reused: No 00:23:28.553 Namespace Write Protected: No 00:23:28.553 Number of LBA Formats: 1 00:23:28.553 Current LBA Format: LBA Format #00 00:23:28.553 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:28.553 
00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:28.553 rmmod nvme_tcp 00:23:28.553 rmmod nvme_fabrics 00:23:28.553 rmmod nvme_keyring 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1711930 ']' 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1711930 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- 
# '[' -z 1711930 ']' 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1711930 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1711930 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1711930' 00:23:28.553 killing process with pid 1711930 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1711930 00:23:28.553 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1711930 00:23:28.812 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:28.812 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:28.812 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:28.812 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:28.812 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:28.812 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:28.812 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:23:28.812 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:28.812 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:23:28.812 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.812 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.812 12:31:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.717 12:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:30.976 00:23:30.976 real 0m9.279s 00:23:30.976 user 0m5.371s 00:23:30.976 sys 0m4.838s 00:23:30.976 12:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:30.976 12:31:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:30.976 ************************************ 00:23:30.976 END TEST nvmf_identify 00:23:30.976 ************************************ 00:23:30.976 12:31:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:30.976 12:31:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:30.976 12:31:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:30.976 12:31:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.976 ************************************ 00:23:30.976 START TEST nvmf_perf 00:23:30.976 ************************************ 00:23:30.976 12:31:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:30.976 * Looking for test storage... 
00:23:30.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:30.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.976 --rc genhtml_branch_coverage=1 00:23:30.976 --rc genhtml_function_coverage=1 00:23:30.976 --rc genhtml_legend=1 00:23:30.976 --rc geninfo_all_blocks=1 00:23:30.976 --rc geninfo_unexecuted_blocks=1 00:23:30.976 00:23:30.976 ' 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:30.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:23:30.976 --rc genhtml_branch_coverage=1 00:23:30.976 --rc genhtml_function_coverage=1 00:23:30.976 --rc genhtml_legend=1 00:23:30.976 --rc geninfo_all_blocks=1 00:23:30.976 --rc geninfo_unexecuted_blocks=1 00:23:30.976 00:23:30.976 ' 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:30.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.976 --rc genhtml_branch_coverage=1 00:23:30.976 --rc genhtml_function_coverage=1 00:23:30.976 --rc genhtml_legend=1 00:23:30.976 --rc geninfo_all_blocks=1 00:23:30.976 --rc geninfo_unexecuted_blocks=1 00:23:30.976 00:23:30.976 ' 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:30.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.976 --rc genhtml_branch_coverage=1 00:23:30.976 --rc genhtml_function_coverage=1 00:23:30.976 --rc genhtml_legend=1 00:23:30.976 --rc geninfo_all_blocks=1 00:23:30.976 --rc geninfo_unexecuted_blocks=1 00:23:30.976 00:23:30.976 ' 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.976 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:31.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:31.236 12:31:53 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:31.236 12:31:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:37.801 12:31:58 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:37.801 
12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:37.801 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:37.801 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:37.801 Found net devices under 0000:86:00.0: cvl_0_0 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:37.801 12:31:58 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:37.801 Found net devices under 0000:86:00.1: cvl_0_1 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:37.801 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:37.802 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:37.802 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:37.802 12:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:37.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:37.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:23:37.802 00:23:37.802 --- 10.0.0.2 ping statistics --- 00:23:37.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.802 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:37.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:37.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:23:37.802 00:23:37.802 --- 10.0.0.1 ping statistics --- 00:23:37.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.802 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1715692 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1715692 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1715692 ']' 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:37.802 [2024-12-10 12:31:59.248370] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:23:37.802 [2024-12-10 12:31:59.248418] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.802 [2024-12-10 12:31:59.329077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:37.802 [2024-12-10 12:31:59.369748] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.802 [2024-12-10 12:31:59.369787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.802 [2024-12-10 12:31:59.369799] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.802 [2024-12-10 12:31:59.369805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.802 [2024-12-10 12:31:59.369810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:37.802 [2024-12-10 12:31:59.371344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.802 [2024-12-10 12:31:59.371455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.802 [2024-12-10 12:31:59.371539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.802 [2024-12-10 12:31:59.371541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/gen_nvme.sh 00:23:37.802 12:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py load_subsystem_config 00:23:41.092 12:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py framework_get_config bdev 00:23:41.092 12:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:41.092 12:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:41.092 12:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:41.092 12:32:02 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:41.092 12:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:23:41.092 12:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:41.092 12:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:41.092 12:32:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:41.093 [2024-12-10 12:32:03.175560] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.093 12:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:41.350 12:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:41.350 12:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:41.608 12:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:41.608 12:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:41.866 12:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:41.866 [2024-12-10 12:32:03.978605] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.866 12:32:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 
10.0.0.2 -s 4420 00:23:42.124 12:32:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:23:42.124 12:32:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:42.124 12:32:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:42.124 12:32:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:43.496 Initializing NVMe Controllers 00:23:43.496 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:23:43.496 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:23:43.496 Initialization complete. Launching workers. 00:23:43.496 ======================================================== 00:23:43.496 Latency(us) 00:23:43.496 Device Information : IOPS MiB/s Average min max 00:23:43.496 PCIE (0000:5e:00.0) NSID 1 from core 0: 97470.16 380.74 327.65 37.21 4311.90 00:23:43.496 ======================================================== 00:23:43.496 Total : 97470.16 380.74 327.65 37.21 4311.90 00:23:43.496 00:23:43.496 12:32:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:44.868 Initializing NVMe Controllers 00:23:44.868 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:44.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:44.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:44.868 Initialization complete. Launching workers. 
00:23:44.868 ======================================================== 00:23:44.868 Latency(us) 00:23:44.868 Device Information : IOPS MiB/s Average min max 00:23:44.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 130.00 0.51 7929.57 102.28 44938.50 00:23:44.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 53.00 0.21 19239.34 7186.16 47887.69 00:23:44.868 ======================================================== 00:23:44.868 Total : 183.00 0.71 11205.08 102.28 47887.69 00:23:44.868 00:23:44.868 12:32:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:46.241 Initializing NVMe Controllers 00:23:46.241 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:46.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:46.241 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:46.241 Initialization complete. Launching workers. 
00:23:46.241 ======================================================== 00:23:46.241 Latency(us) 00:23:46.241 Device Information : IOPS MiB/s Average min max 00:23:46.241 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10904.59 42.60 2934.14 512.04 9141.18 00:23:46.241 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3800.86 14.85 8478.54 7127.59 16666.17 00:23:46.241 ======================================================== 00:23:46.241 Total : 14705.44 57.44 4367.18 512.04 16666.17 00:23:46.241 00:23:46.241 12:32:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:46.241 12:32:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:46.241 12:32:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:48.769 Initializing NVMe Controllers 00:23:48.769 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:48.769 Controller IO queue size 128, less than required. 00:23:48.769 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:48.769 Controller IO queue size 128, less than required. 00:23:48.769 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:48.769 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:48.769 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:48.769 Initialization complete. Launching workers. 
00:23:48.769 ======================================================== 00:23:48.769 Latency(us) 00:23:48.769 Device Information : IOPS MiB/s Average min max 00:23:48.769 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1783.99 446.00 72910.31 50925.54 133733.55 00:23:48.769 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 600.00 150.00 217783.46 79206.05 308874.09 00:23:48.769 ======================================================== 00:23:48.769 Total : 2383.99 596.00 109371.67 50925.54 308874.09 00:23:48.769 00:23:48.769 12:32:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:48.769 No valid NVMe controllers or AIO or URING devices found 00:23:48.769 Initializing NVMe Controllers 00:23:48.769 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:48.769 Controller IO queue size 128, less than required. 00:23:48.769 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:48.769 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:48.769 Controller IO queue size 128, less than required. 00:23:48.769 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:48.769 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:23:48.769 WARNING: Some requested NVMe devices were skipped 00:23:48.769 12:32:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:52.052 Initializing NVMe Controllers 00:23:52.052 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:52.052 Controller IO queue size 128, less than required. 00:23:52.052 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:52.052 Controller IO queue size 128, less than required. 00:23:52.052 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:52.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:52.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:52.052 Initialization complete. Launching workers. 
00:23:52.052 00:23:52.052 ==================== 00:23:52.052 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:52.052 TCP transport: 00:23:52.052 polls: 11561 00:23:52.052 idle_polls: 7974 00:23:52.052 sock_completions: 3587 00:23:52.052 nvme_completions: 5995 00:23:52.052 submitted_requests: 9014 00:23:52.052 queued_requests: 1 00:23:52.052 00:23:52.052 ==================== 00:23:52.052 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:52.052 TCP transport: 00:23:52.052 polls: 11316 00:23:52.052 idle_polls: 7359 00:23:52.052 sock_completions: 3957 00:23:52.052 nvme_completions: 6559 00:23:52.052 submitted_requests: 9862 00:23:52.052 queued_requests: 1 00:23:52.052 ======================================================== 00:23:52.052 Latency(us) 00:23:52.052 Device Information : IOPS MiB/s Average min max 00:23:52.052 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1497.69 374.42 88056.16 47058.28 165465.96 00:23:52.052 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1638.61 409.65 78836.87 46197.43 109587.95 00:23:52.052 ======================================================== 00:23:52.052 Total : 3136.30 784.07 83239.39 46197.43 165465.96 00:23:52.052 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:52.052 rmmod nvme_tcp 00:23:52.052 rmmod nvme_fabrics 00:23:52.052 rmmod nvme_keyring 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1715692 ']' 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1715692 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1715692 ']' 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1715692 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1715692 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1715692' 00:23:52.052 killing process with pid 1715692 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 1715692 00:23:52.052 12:32:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1715692 00:23:53.426 12:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:53.426 12:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:53.426 12:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:53.426 12:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:53.426 12:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:53.426 12:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:53.426 12:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:53.426 12:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:53.426 12:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:53.426 12:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.426 12:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.426 12:32:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.330 12:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:55.330 00:23:55.330 real 0m24.444s 00:23:55.330 user 1m3.523s 00:23:55.330 sys 0m8.253s 00:23:55.330 12:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:55.330 12:32:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:55.330 ************************************ 00:23:55.330 END TEST nvmf_perf 00:23:55.330 ************************************ 00:23:55.330 12:32:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:55.330 12:32:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:55.330 12:32:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:55.330 12:32:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.330 ************************************ 00:23:55.330 START TEST nvmf_fio_host 00:23:55.330 ************************************ 00:23:55.330 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:55.590 * Looking for test storage... 00:23:55.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:55.590 12:32:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:55.590 12:32:17 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:55.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.590 --rc genhtml_branch_coverage=1 00:23:55.590 --rc genhtml_function_coverage=1 00:23:55.590 --rc genhtml_legend=1 00:23:55.590 --rc geninfo_all_blocks=1 00:23:55.590 --rc geninfo_unexecuted_blocks=1 00:23:55.590 00:23:55.590 ' 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:55.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.590 --rc genhtml_branch_coverage=1 00:23:55.590 --rc genhtml_function_coverage=1 00:23:55.590 --rc genhtml_legend=1 00:23:55.590 --rc geninfo_all_blocks=1 00:23:55.590 --rc geninfo_unexecuted_blocks=1 00:23:55.590 00:23:55.590 ' 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:55.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.590 --rc genhtml_branch_coverage=1 00:23:55.590 --rc genhtml_function_coverage=1 00:23:55.590 --rc genhtml_legend=1 00:23:55.590 --rc geninfo_all_blocks=1 00:23:55.590 --rc geninfo_unexecuted_blocks=1 00:23:55.590 00:23:55.590 ' 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:55.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.590 --rc genhtml_branch_coverage=1 00:23:55.590 --rc genhtml_function_coverage=1 00:23:55.590 --rc genhtml_legend=1 00:23:55.590 --rc geninfo_all_blocks=1 00:23:55.590 --rc geninfo_unexecuted_blocks=1 00:23:55.590 00:23:55.590 ' 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.590 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:55.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:55.591 
12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:55.591 12:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:24:02.160 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:02.160 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:02.161 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.161 12:32:23 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:02.161 Found net devices under 0000:86:00.0: cvl_0_0 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:02.161 Found net devices under 0000:86:00.1: cvl_0_1 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:02.161 12:32:23 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:02.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:02.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:24:02.161 00:24:02.161 --- 10.0.0.2 ping statistics --- 00:24:02.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.161 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:02.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:02.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:24:02.161 00:24:02.161 --- 10.0.0.1 ping statistics --- 00:24:02.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.161 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1721799 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- 
# trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1721799 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1721799 ']' 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.161 [2024-12-10 12:32:23.635913] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:24:02.161 [2024-12-10 12:32:23.635960] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.161 [2024-12-10 12:32:23.714494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:02.161 [2024-12-10 12:32:23.756260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.161 [2024-12-10 12:32:23.756294] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:02.161 [2024-12-10 12:32:23.756303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.161 [2024-12-10 12:32:23.756309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.161 [2024-12-10 12:32:23.756314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:02.161 [2024-12-10 12:32:23.757907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.161 [2024-12-10 12:32:23.758016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.161 [2024-12-10 12:32:23.758121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.161 [2024-12-10 12:32:23.758122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:24:02.161 12:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:02.161 [2024-12-10 12:32:24.032375] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.161 12:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:02.161 12:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:02.161 12:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.161 12:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:02.161 Malloc1 00:24:02.420 12:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:02.420 12:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:02.679 12:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:02.937 [2024-12-10 12:32:24.906843] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.937 12:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:03.195 12:32:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme 00:24:03.195 12:32:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:03.195 12:32:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:03.195 12:32:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:03.195 12:32:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:03.195 12:32:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:03.195 12:32:25 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:24:03.195 12:32:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:03.195 12:32:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:03.195 12:32:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:03.195 12:32:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:24:03.195 12:32:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:03.195 12:32:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:03.195 12:32:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:03.195 12:32:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:03.195 12:32:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:03.195 12:32:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:24:03.195 12:32:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:03.195 12:32:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:03.195 12:32:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:03.195 12:32:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:03.195 12:32:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme' 00:24:03.195 12:32:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:03.454 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:03.454 fio-3.35 00:24:03.454 Starting 1 thread 00:24:05.985 00:24:05.985 test: (groupid=0, jobs=1): err= 0: pid=1722195: Tue Dec 10 12:32:27 2024 00:24:05.985 read: IOPS=11.7k, BW=45.7MiB/s (47.9MB/s)(91.6MiB/2005msec) 00:24:05.985 slat (nsec): min=1583, max=237959, avg=1733.79, stdev=2203.01 00:24:05.985 clat (usec): min=3129, max=9867, avg=6066.39, stdev=444.01 00:24:05.985 lat (usec): min=3162, max=9869, avg=6068.12, stdev=443.95 00:24:05.985 clat percentiles (usec): 00:24:05.985 | 1.00th=[ 5014], 5.00th=[ 5342], 10.00th=[ 5473], 20.00th=[ 5735], 00:24:05.985 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6063], 60.00th=[ 6194], 00:24:05.985 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6587], 95.00th=[ 6783], 00:24:05.985 | 99.00th=[ 7046], 99.50th=[ 7177], 99.90th=[ 7963], 99.95th=[ 8979], 00:24:05.985 | 99.99th=[ 9896] 00:24:05.985 bw ( KiB/s): min=46000, max=47376, per=99.96%, avg=46754.00, stdev=584.03, samples=4 00:24:05.985 iops : min=11500, max=11844, avg=11688.50, stdev=146.01, samples=4 00:24:05.985 write: IOPS=11.6k, BW=45.3MiB/s (47.5MB/s)(90.9MiB/2005msec); 0 zone resets 00:24:05.985 slat (nsec): min=1611, max=220962, avg=1786.37, stdev=1622.89 00:24:05.985 clat (usec): min=2427, max=9779, avg=4885.77, stdev=379.11 00:24:05.985 lat (usec): min=2443, max=9781, avg=4887.56, stdev=379.11 00:24:05.985 clat percentiles (usec): 00:24:05.985 | 1.00th=[ 4047], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4621], 00:24:05.985 | 30.00th=[ 4686], 40.00th=[ 4817], 50.00th=[ 4883], 60.00th=[ 4948], 
00:24:05.985 | 70.00th=[ 5080], 80.00th=[ 5145], 90.00th=[ 5342], 95.00th=[ 5473], 00:24:05.985 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 7963], 99.95th=[ 9110], 00:24:05.985 | 99.99th=[ 9765] 00:24:05.985 bw ( KiB/s): min=46032, max=46848, per=99.99%, avg=46422.00, stdev=338.36, samples=4 00:24:05.985 iops : min=11508, max=11712, avg=11605.50, stdev=84.59, samples=4 00:24:05.985 lat (msec) : 4=0.39%, 10=99.61% 00:24:05.985 cpu : usr=73.95%, sys=25.10%, ctx=79, majf=0, minf=2 00:24:05.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:05.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.986 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:05.986 issued rwts: total=23444,23271,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.986 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:05.986 00:24:05.986 Run status group 0 (all jobs): 00:24:05.986 READ: bw=45.7MiB/s (47.9MB/s), 45.7MiB/s-45.7MiB/s (47.9MB/s-47.9MB/s), io=91.6MiB (96.0MB), run=2005-2005msec 00:24:05.986 WRITE: bw=45.3MiB/s (47.5MB/s), 45.3MiB/s-45.3MiB/s (47.5MB/s-47.5MB/s), io=90.9MiB (95.3MB), run=2005-2005msec 00:24:05.986 12:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:05.986 12:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:05.986 12:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:05.986 12:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:24:05.986 12:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:05.986 12:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:24:05.986 12:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:24:05.986 12:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:05.986 12:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:05.986 12:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:24:05.986 12:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:24:05.986 12:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:05.986 12:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:05.986 12:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:05.986 12:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:05.986 12:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:24:05.986 12:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:05.986 12:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:05.986 12:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:05.986 12:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ 
-n '' ]] 00:24:05.986 12:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme' 00:24:05.986 12:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:05.986 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:05.986 fio-3.35 00:24:05.986 Starting 1 thread 00:24:08.518 00:24:08.518 test: (groupid=0, jobs=1): err= 0: pid=1722746: Tue Dec 10 12:32:30 2024 00:24:08.518 read: IOPS=10.9k, BW=170MiB/s (178MB/s)(342MiB/2007msec) 00:24:08.518 slat (nsec): min=2557, max=90275, avg=2851.67, stdev=1310.72 00:24:08.518 clat (usec): min=1672, max=12639, avg=6756.68, stdev=1570.30 00:24:08.518 lat (usec): min=1675, max=12641, avg=6759.53, stdev=1570.40 00:24:08.518 clat percentiles (usec): 00:24:08.518 | 1.00th=[ 3621], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5342], 00:24:08.518 | 30.00th=[ 5800], 40.00th=[ 6259], 50.00th=[ 6783], 60.00th=[ 7242], 00:24:08.518 | 70.00th=[ 7635], 80.00th=[ 7963], 90.00th=[ 8717], 95.00th=[ 9241], 00:24:08.518 | 99.00th=[11076], 99.50th=[11600], 99.90th=[12125], 99.95th=[12125], 00:24:08.518 | 99.99th=[12256] 00:24:08.518 bw ( KiB/s): min=83232, max=98112, per=50.59%, avg=88176.00, stdev=6767.80, samples=4 00:24:08.518 iops : min= 5202, max= 6132, avg=5511.00, stdev=422.99, samples=4 00:24:08.518 write: IOPS=6461, BW=101MiB/s (106MB/s)(181MiB/1790msec); 0 zone resets 00:24:08.518 slat (usec): min=29, max=259, avg=31.82, stdev= 5.87 00:24:08.518 clat (usec): min=3277, max=15038, avg=8663.59, stdev=1568.85 00:24:08.518 lat (usec): min=3308, max=15069, avg=8695.41, stdev=1569.57 00:24:08.518 clat percentiles (usec): 00:24:08.518 | 1.00th=[ 5735], 5.00th=[ 6456], 10.00th=[ 
6849], 20.00th=[ 7308], 00:24:08.518 | 30.00th=[ 7701], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8848], 00:24:08.518 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[10814], 95.00th=[11600], 00:24:08.518 | 99.00th=[12649], 99.50th=[13173], 99.90th=[14484], 99.95th=[14746], 00:24:08.518 | 99.99th=[14877] 00:24:08.518 bw ( KiB/s): min=86784, max=102400, per=89.08%, avg=92096.00, stdev=7017.90, samples=4 00:24:08.518 iops : min= 5424, max= 6400, avg=5756.00, stdev=438.62, samples=4 00:24:08.518 lat (msec) : 2=0.05%, 4=1.70%, 10=89.93%, 20=8.32% 00:24:08.518 cpu : usr=87.74%, sys=11.52%, ctx=33, majf=0, minf=2 00:24:08.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:08.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:08.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:08.518 issued rwts: total=21862,11566,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:08.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:08.518 00:24:08.518 Run status group 0 (all jobs): 00:24:08.518 READ: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=342MiB (358MB), run=2007-2007msec 00:24:08.518 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=181MiB (189MB), run=1790-1790msec 00:24:08.518 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:08.518 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:08.518 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:08.518 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:08.518 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:08.518 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:24:08.518 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:08.518 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:08.518 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:08.518 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:08.518 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:08.518 rmmod nvme_tcp 00:24:08.518 rmmod nvme_fabrics 00:24:08.777 rmmod nvme_keyring 00:24:08.777 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:08.777 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:08.777 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:08.777 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1721799 ']' 00:24:08.777 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1721799 00:24:08.777 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1721799 ']' 00:24:08.777 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1721799 00:24:08.777 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:24:08.777 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:08.777 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1721799 00:24:08.777 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:08.777 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:08.777 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1721799' 00:24:08.777 killing process with pid 1721799 00:24:08.777 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1721799 00:24:08.777 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1721799 00:24:09.036 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:09.036 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:09.036 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:09.036 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:09.036 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:09.036 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:09.036 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:09.036 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:09.036 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:09.036 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.036 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.036 12:32:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.941 12:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:10.941 00:24:10.941 real 0m15.572s 00:24:10.941 user 0m46.192s 00:24:10.941 sys 0m6.410s 00:24:10.941 12:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:10.941 12:32:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.941 
************************************ 00:24:10.941 END TEST nvmf_fio_host 00:24:10.941 ************************************ 00:24:10.941 12:32:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:10.941 12:32:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:10.941 12:32:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:10.941 12:32:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.201 ************************************ 00:24:11.201 START TEST nvmf_failover 00:24:11.201 ************************************ 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:11.201 * Looking for test storage... 00:24:11.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 
00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:11.201 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:11.202 12:32:33 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:11.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.202 --rc genhtml_branch_coverage=1 00:24:11.202 --rc genhtml_function_coverage=1 00:24:11.202 --rc genhtml_legend=1 00:24:11.202 --rc geninfo_all_blocks=1 00:24:11.202 --rc geninfo_unexecuted_blocks=1 00:24:11.202 00:24:11.202 ' 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:11.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.202 --rc genhtml_branch_coverage=1 00:24:11.202 --rc genhtml_function_coverage=1 00:24:11.202 --rc genhtml_legend=1 00:24:11.202 --rc geninfo_all_blocks=1 00:24:11.202 --rc geninfo_unexecuted_blocks=1 00:24:11.202 00:24:11.202 ' 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:11.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.202 --rc genhtml_branch_coverage=1 00:24:11.202 --rc genhtml_function_coverage=1 00:24:11.202 --rc genhtml_legend=1 00:24:11.202 --rc geninfo_all_blocks=1 00:24:11.202 --rc geninfo_unexecuted_blocks=1 00:24:11.202 00:24:11.202 ' 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:11.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.202 --rc genhtml_branch_coverage=1 00:24:11.202 --rc genhtml_function_coverage=1 00:24:11.202 --rc 
genhtml_legend=1 00:24:11.202 --rc geninfo_all_blocks=1 00:24:11.202 --rc geninfo_unexecuted_blocks=1 00:24:11.202 00:24:11.202 ' 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:11.202 12:32:33 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:11.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:11.202 12:32:33 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:11.202 12:32:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:17.829 12:32:38 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:17.829 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
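The device-probe trace above (nvmf/common.sh@315-377) builds per-family arrays of PCI vendor:device IDs and then matches each discovered port against them; the two `Found 0000:86:00.x (0x8086 - 0x159b)` lines show both ports landing in the e810 bucket. A minimal Python sketch of that bucketing — the function name and structure are illustrative, not SPDK code; only the ID lists are copied from the trace:

```python
# Illustrative sketch of the NIC-family bucketing done by
# gather_supported_nvmf_pci_devs in nvmf/common.sh (see trace above).
# classify_nic is a made-up name; the ID sets come from the log.

INTEL, MELLANOX = 0x8086, 0x15B3

# Device-ID buckets as assembled at nvmf/common.sh lines 325-344
E810 = {0x1592, 0x159B}
X722 = {0x37D2}
MLX = {0xA2DC, 0x1021, 0xA2D6, 0x101D, 0x101B, 0x1017, 0x1019, 0x1015, 0x1013}

def classify_nic(vendor: int, device: int) -> str:
    """Return the NIC family a vendor:device pair falls into, or 'unsupported'."""
    if vendor == INTEL and device in E810:
        return "e810"
    if vendor == INTEL and device in X722:
        return "x722"
    if vendor == MELLANOX and device in MLX:
        return "mlx"
    return "unsupported"

# The two ports found in the log: 0000:86:00.0/.1 (0x8086 - 0x159b)
print(classify_nic(0x8086, 0x159B))  # -> e810
```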
00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:17.829 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:17.829 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.830 
12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:17.830 Found net devices under 0000:86:00.0: cvl_0_0 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:17.830 Found net devices under 0000:86:00.1: cvl_0_1 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:17.830 12:32:38 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:17.830 12:32:38 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:17.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:17.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:24:17.830 00:24:17.830 --- 10.0.0.2 ping statistics --- 00:24:17.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.830 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:17.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
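The connectivity check above pings 10.0.0.2 from the host side and 10.0.0.1 from inside the cvl_0_0_ns_spdk namespace, and the interesting signal is the `rtt min/avg/max/mdev` summary line. A small hypothetical helper — not part of the SPDK scripts — that extracts the timings from such a summary, using the exact line format printed in this log:

```python
import re

def parse_ping_rtt(summary: str) -> dict:
    """Extract min/avg/max/mdev (in ms) from an iputils ping summary line."""
    m = re.search(
        r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", summary
    )
    if m is None:
        raise ValueError("no rtt summary found")
    return dict(zip(("min", "avg", "max", "mdev"), map(float, m.groups())))

# Summary line as it appears in the trace above
line = "rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms"
print(parse_ping_rtt(line)["avg"])  # -> 0.416
```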
00:24:17.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:24:17.830 00:24:17.830 --- 10.0.0.1 ping statistics --- 00:24:17.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.830 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1726726 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1726726 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1726726 ']' 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:17.830 [2024-12-10 12:32:39.356955] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:24:17.830 [2024-12-10 12:32:39.357007] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.830 [2024-12-10 12:32:39.437544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:17.830 [2024-12-10 12:32:39.479659] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.830 [2024-12-10 12:32:39.479695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.830 [2024-12-10 12:32:39.479702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.830 [2024-12-10 12:32:39.479708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:17.830 [2024-12-10 12:32:39.479714] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:17.830 [2024-12-10 12:32:39.481059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.830 [2024-12-10 12:32:39.481185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.830 [2024-12-10 12:32:39.481185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:17.830 [2024-12-10 12:32:39.783063] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.830 12:32:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:18.089 Malloc0 00:24:18.089 12:32:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:18.348 12:32:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:18.348 12:32:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:18.606 [2024-12-10 12:32:40.644963] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:18.606 12:32:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:18.865 [2024-12-10 12:32:40.837501] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:18.865 12:32:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:18.865 [2024-12-10 12:32:41.026112] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:19.124 12:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1726990 00:24:19.124 12:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:19.124 12:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:19.124 12:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1726990 /var/tmp/bdevperf.sock 00:24:19.124 12:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@835 -- # '[' -z 1726990 ']' 00:24:19.124 12:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.124 12:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:19.124 12:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:19.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:19.124 12:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:19.124 12:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:19.382 12:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:19.382 12:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:19.382 12:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:19.640 NVMe0n1 00:24:19.640 12:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:19.898 00:24:19.899 12:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:19.899 12:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1727217 00:24:19.899 12:32:41 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@41 -- # sleep 1 00:24:20.832 12:32:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:21.091 12:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:24.375 12:32:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:24.375 00:24:24.375 12:32:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:24.634 [2024-12-10 12:32:46.615667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b30d80 is same with the state(6) to be set 00:24:24.634 [message repeated through 2024-12-10 12:32:46.615944] 12:32:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:27.916 12:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-12-10 12:32:49.840538] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.916 12:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:28.849 12:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:29.106 [2024-12-10 12:32:51.054900]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7d1a0 is same with the state(6) to be set 00:24:29.107 12:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1727217 00:24:35.673 { 00:24:35.673 "results": [ 00:24:35.673 { 00:24:35.673 "job": "NVMe0n1", 00:24:35.673 "core_mask": "0x1", 00:24:35.673 "workload": "verify", 00:24:35.673 "status": "finished", 00:24:35.673 "verify_range": { 00:24:35.673 "start": 0, 00:24:35.673 "length": 16384 00:24:35.673 }, 00:24:35.673 "queue_depth": 128, 00:24:35.673 "io_size": 4096, 00:24:35.673 "runtime": 15.008897, 00:24:35.673 "iops": 10990.947569298396, 00:24:35.673 "mibps":
42.93338894257186, 00:24:35.673 "io_failed": 5861, 00:24:35.673 "io_timeout": 0, 00:24:35.673 "avg_latency_us": 11223.753596768991, 00:24:35.673 "min_latency_us": 432.7513043478261, 00:24:35.673 "max_latency_us": 17552.250434782607 00:24:35.673 } 00:24:35.673 ], 00:24:35.673 "core_count": 1 00:24:35.673 } 00:24:35.674 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1726990 00:24:35.674 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1726990 ']' 00:24:35.674 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1726990 00:24:35.674 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:35.674 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:35.674 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1726990 00:24:35.674 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:35.674 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:35.674 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1726990' 00:24:35.674 killing process with pid 1726990 00:24:35.674 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1726990 00:24:35.674 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1726990 00:24:35.674 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt 00:24:35.674 [2024-12-10 12:32:41.100684] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:24:35.674 [2024-12-10 12:32:41.100735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1726990 ] 00:24:35.674 [2024-12-10 12:32:41.167437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.674 [2024-12-10 12:32:41.208033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.674 Running I/O for 15 seconds... 00:24:35.674 11375.00 IOPS, 44.43 MiB/s [2024-12-10T11:32:57.842Z] [2024-12-10 12:32:43.114041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.674 [2024-12-10 12:32:43.114083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.674 [2024-12-10 12:32:43.114101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.674 [2024-12-10 12:32:43.114116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.674 [2024-12-10 12:32:43.114129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:35.674 [2024-12-10 12:32:43.114136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa60fe0 is same with the state(6) to be set 00:24:35.674 [2024-12-10 12:32:43.114190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:100520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:100528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:100536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:100552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114276] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:100568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:100576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:100584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:100608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:100616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:100624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:100632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:35.674 [2024-12-10 12:32:43.114467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:100656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:100664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:100672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:100680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:100728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.674 [2024-12-10 12:32:43.114655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:100744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.674 [2024-12-10 12:32:43.114662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.114670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:100752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.675 [2024-12-10 12:32:43.114677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.114685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.675 [2024-12-10 12:32:43.114691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.114699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:100768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.675 [2024-12-10 12:32:43.114708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.114716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.675 [2024-12-10 12:32:43.114722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.114730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:35.675 [2024-12-10 12:32:43.114737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.114744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.675 [2024-12-10 12:32:43.114751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.114760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.675 [2024-12-10 12:32:43.114766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.114775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.675 [2024-12-10 12:32:43.114782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.114790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:100816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.675 [2024-12-10 12:32:43.114797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.114805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:100824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.675 [2024-12-10 12:32:43.114811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.114820] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:100960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.114827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.114834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.114841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.114849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.114856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.114864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.114871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.114879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.114885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.114895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.114902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.114910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.114917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.114925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.114932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.114940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.114946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.114954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:101032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.114960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.114968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.114975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.114983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:101048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:35.675 [2024-12-10 12:32:43.114990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.114998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.115004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.115012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:101064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.115018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.115027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:101072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.115034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.115043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.115050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.115057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.115064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.115072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:101096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.115081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.115089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.115096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.115105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:101112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.115111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.115119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:101120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.115126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.115133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.115141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.115150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.115156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.115170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.115176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.115184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.115191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.115206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.115214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.115222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.115228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.115236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.115243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.115251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 
[2024-12-10 12:32:43.115257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.115266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.115273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.675 [2024-12-10 12:32:43.115281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.675 [2024-12-10 12:32:43.115289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115341] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:100832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.676 [2024-12-10 12:32:43.115347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:101240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:35.676 [2024-12-10 12:32:43.115510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:101328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:101336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115590] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:101400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:101424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:101432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:101440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:101448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 
[2024-12-10 12:32:43.115761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:101472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:101496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:101504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.676 [2024-12-10 12:32:43.115866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.676 [2024-12-10 12:32:43.115874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:101512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.677 [2024-12-10 12:32:43.115881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:43.115889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.677 [2024-12-10 12:32:43.115895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:43.115903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:101528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.677 [2024-12-10 12:32:43.115909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:43.115918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:101536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.677 [2024-12-10 12:32:43.115925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:43.115934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.677 [2024-12-10 12:32:43.115940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:43.115948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.677 [2024-12-10 12:32:43.115955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:43.115963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.677 [2024-12-10 12:32:43.115969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:43.115977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.677 [2024-12-10 12:32:43.115984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:43.115992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:100872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.677 [2024-12-10 12:32:43.115999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:43.116006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:35.677 [2024-12-10 12:32:43.116012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:43.116021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:100888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.677 [2024-12-10 12:32:43.116028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:43.116036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:100896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.677 [2024-12-10 12:32:43.116043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:43.116051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:100904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.677 [2024-12-10 12:32:43.116057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:43.116065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:100912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.677 [2024-12-10 12:32:43.116071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:43.116079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:100920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.677 [2024-12-10 12:32:43.116085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:43.116094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.677 [2024-12-10 12:32:43.116104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:43.116112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.677 [2024-12-10 12:32:43.116119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:43.116126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:100944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.677 [2024-12-10 12:32:43.116132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:43.116152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:35.677 [2024-12-10 12:32:43.116162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:35.677 [2024-12-10 12:32:43.116170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100952 len:8 PRP1 0x0 PRP2 0x0 00:24:35.677 [2024-12-10 12:32:43.116176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:43.116221] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:35.677 [2024-12-10 12:32:43.116239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:24:35.677 [2024-12-10 12:32:43.119099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:35.677 [2024-12-10 12:32:43.119130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa60fe0 (9): Bad file descriptor 00:24:35.677 [2024-12-10 12:32:43.142177] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:24:35.677 10950.00 IOPS, 42.77 MiB/s [2024-12-10T11:32:57.845Z] 10956.00 IOPS, 42.80 MiB/s [2024-12-10T11:32:57.845Z] 11018.75 IOPS, 43.04 MiB/s [2024-12-10T11:32:57.845Z] [2024-12-10 12:32:46.616219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.677 [2024-12-10 12:32:46.616255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:46.616275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.677 [2024-12-10 12:32:46.616284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:46.616293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.677 [2024-12-10 12:32:46.616299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:46.616308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.677 [2024-12-10 12:32:46.616314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:46.616322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.677 [2024-12-10 12:32:46.616329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:46.616337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.677 [2024-12-10 12:32:46.616344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:46.616352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.677 [2024-12-10 12:32:46.616359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:46.616367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.677 [2024-12-10 12:32:46.616374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:46.616383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.677 [2024-12-10 12:32:46.616389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:46.616397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.677 [2024-12-10 
12:32:46.616404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:46.616412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.677 [2024-12-10 12:32:46.616419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:46.616426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.677 [2024-12-10 12:32:46.616433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:46.616441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.677 [2024-12-10 12:32:46.616451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:46.616459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.677 [2024-12-10 12:32:46.616467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:46.616476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.677 [2024-12-10 12:32:46.616483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.677 [2024-12-10 12:32:46.616491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:16 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.677 [2024-12-10 12:32:46.616497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.677 [2024-12-10 12:32:46.616505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.677 [2024-12-10 12:32:46.616513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.677 [2024-12-10 12:32:46.616522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.677 [2024-12-10 12:32:46.616529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.677 [2024-12-10 12:32:46.616537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.678 [2024-12-10 12:32:46.616863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.616990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.616998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.617006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.617013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.617021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.617028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.617036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.617045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.678 [2024-12-10 12:32:46.617054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.678 [2024-12-10 12:32:46.617061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.679 [2024-12-10 12:32:46.617652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.679 [2024-12-10 12:32:46.617661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.680 [2024-12-10 12:32:46.617667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.617675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.680 [2024-12-10 12:32:46.617681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.617689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.680 [2024-12-10 12:32:46.617696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.617704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.680 [2024-12-10 12:32:46.617711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.617719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.680 [2024-12-10 12:32:46.617726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.617734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.680 [2024-12-10 12:32:46.617741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.617749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.680 [2024-12-10 12:32:46.617756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.617763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.680 [2024-12-10 12:32:46.617770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.617778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.680 [2024-12-10 12:32:46.617785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.617793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.680 [2024-12-10 12:32:46.617799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.617810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:35.680 [2024-12-10 12:32:46.617818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.617839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:35.680 [2024-12-10 12:32:46.617846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:8 PRP1 0x0 PRP2 0x0
00:24:35.680 [2024-12-10 12:32:46.617853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.617863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:35.680 [2024-12-10 12:32:46.617868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:35.680 [2024-12-10 12:32:46.617874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23144 len:8 PRP1 0x0 PRP2 0x0
00:24:35.680 [2024-12-10 12:32:46.617880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.617887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:35.680 [2024-12-10 12:32:46.617892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:35.680 [2024-12-10 12:32:46.617898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23152 len:8 PRP1 0x0 PRP2 0x0
00:24:35.680 [2024-12-10 12:32:46.617905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.617911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:35.680 [2024-12-10 12:32:46.617917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:35.680 [2024-12-10 12:32:46.617923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23160 len:8 PRP1 0x0 PRP2 0x0
00:24:35.680 [2024-12-10 12:32:46.617929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.617937] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:35.680 [2024-12-10 12:32:46.617943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:35.680 [2024-12-10 12:32:46.617949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:8 PRP1 0x0 PRP2 0x0
00:24:35.680 [2024-12-10 12:32:46.617956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.617964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:35.680 [2024-12-10 12:32:46.617969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:35.680 [2024-12-10 12:32:46.617974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23176 len:8 PRP1 0x0 PRP2 0x0
00:24:35.680 [2024-12-10 12:32:46.617980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.617988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:35.680 [2024-12-10 12:32:46.617994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:35.680 [2024-12-10 12:32:46.618001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23184 len:8 PRP1 0x0 PRP2 0x0
00:24:35.680 [2024-12-10 12:32:46.618007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.618014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:35.680 [2024-12-10 12:32:46.618021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:35.680 [2024-12-10 12:32:46.618029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23192 len:8 PRP1 0x0 PRP2 0x0
00:24:35.680 [2024-12-10 12:32:46.618038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.618046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:35.680 [2024-12-10 12:32:46.618051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:35.680 [2024-12-10 12:32:46.618057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:8 PRP1 0x0 PRP2 0x0
00:24:35.680 [2024-12-10 12:32:46.618064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.618071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:35.680 [2024-12-10 12:32:46.618076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:35.680 [2024-12-10 12:32:46.618081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23208 len:8 PRP1 0x0 PRP2 0x0
00:24:35.680 [2024-12-10 12:32:46.618087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.618094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:35.680 [2024-12-10 12:32:46.618099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:35.680 [2024-12-10 12:32:46.618105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23216 len:8 PRP1 0x0 PRP2 0x0
00:24:35.680 [2024-12-10 12:32:46.618111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.618118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:35.680 [2024-12-10 12:32:46.618122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:35.680 [2024-12-10 12:32:46.618128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23224 len:8 PRP1 0x0 PRP2 0x0
00:24:35.680 [2024-12-10 12:32:46.618134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.618140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:35.680 [2024-12-10 12:32:46.618145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:35.680 [2024-12-10 12:32:46.618150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:8 PRP1 0x0 PRP2 0x0
00:24:35.680 [2024-12-10 12:32:46.618156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.618168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:35.680 [2024-12-10 12:32:46.618174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:35.680 [2024-12-10 12:32:46.618179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23240 len:8 PRP1 0x0 PRP2 0x0
00:24:35.680 [2024-12-10 12:32:46.618186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.618192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:35.680 [2024-12-10 12:32:46.618197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:35.680 [2024-12-10 12:32:46.618202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23248 len:8 PRP1 0x0 PRP2 0x0
00:24:35.680 [2024-12-10 12:32:46.618208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.618219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:35.680 [2024-12-10 12:32:46.618224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:35.680 [2024-12-10 12:32:46.618230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23256 len:8 PRP1 0x0 PRP2 0x0
00:24:35.680 [2024-12-10 12:32:46.618237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.618244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:35.680 [2024-12-10 12:32:46.618248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:35.680 [2024-12-10 12:32:46.618253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:8 PRP1 0x0 PRP2 0x0
00:24:35.680 [2024-12-10 12:32:46.618259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.680 [2024-12-10 12:32:46.618266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:35.680 [2024-12-10 12:32:46.618271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:35.680 [2024-12-10 12:32:46.618277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23272 len:8 PRP1 0x0 PRP2 0x0
00:24:35.681 [2024-12-10 12:32:46.618283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.681 [2024-12-10 12:32:46.618290] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:35.681 [2024-12-10 12:32:46.618294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:35.681 [2024-12-10 12:32:46.618300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22272 len:8 PRP1 0x0 PRP2 0x0
00:24:35.681 [2024-12-10 12:32:46.618306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:35.681 [2024-12-10 12:32:46.618313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:35.681 [2024-12-10 12:32:46.618318] nvme_qpair.c:
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:35.681 [2024-12-10 12:32:46.618323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22280 len:8 PRP1 0x0 PRP2 0x0 00:24:35.681 [2024-12-10 12:32:46.626999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:46.627011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:35.681 [2024-12-10 12:32:46.627017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:35.681 [2024-12-10 12:32:46.627023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22288 len:8 PRP1 0x0 PRP2 0x0 00:24:35.681 [2024-12-10 12:32:46.627030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:46.627039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:35.681 [2024-12-10 12:32:46.627044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:35.681 [2024-12-10 12:32:46.627050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22296 len:8 PRP1 0x0 PRP2 0x0 00:24:35.681 [2024-12-10 12:32:46.627056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:46.627063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:35.681 [2024-12-10 12:32:46.627069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:35.681 [2024-12-10 12:32:46.627075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22304 len:8 PRP1 0x0 PRP2 0x0 00:24:35.681 
[2024-12-10 12:32:46.627084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:46.627091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:35.681 [2024-12-10 12:32:46.627096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:35.681 [2024-12-10 12:32:46.627101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22312 len:8 PRP1 0x0 PRP2 0x0 00:24:35.681 [2024-12-10 12:32:46.627109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:46.627118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:35.681 [2024-12-10 12:32:46.627122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:35.681 [2024-12-10 12:32:46.627128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22320 len:8 PRP1 0x0 PRP2 0x0 00:24:35.681 [2024-12-10 12:32:46.627134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:46.627184] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:35.681 [2024-12-10 12:32:46.627206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.681 [2024-12-10 12:32:46.627214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:46.627222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:35.681 [2024-12-10 12:32:46.627228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:46.627235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.681 [2024-12-10 12:32:46.627242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:46.627251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.681 [2024-12-10 12:32:46.627258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:46.627265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:35.681 [2024-12-10 12:32:46.627297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa60fe0 (9): Bad file descriptor 00:24:35.681 [2024-12-10 12:32:46.630941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:35.681 [2024-12-10 12:32:46.700261] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:24:35.681 10819.00 IOPS, 42.26 MiB/s [2024-12-10T11:32:57.849Z] 10864.50 IOPS, 42.44 MiB/s [2024-12-10T11:32:57.849Z] 10943.29 IOPS, 42.75 MiB/s [2024-12-10T11:32:57.849Z] 10972.88 IOPS, 42.86 MiB/s [2024-12-10T11:32:57.849Z] 10994.00 IOPS, 42.95 MiB/s [2024-12-10T11:32:57.849Z] [2024-12-10 12:32:51.055585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.681 [2024-12-10 12:32:51.055618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:51.055634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:43936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.681 [2024-12-10 12:32:51.055642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:51.055652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.681 [2024-12-10 12:32:51.055664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:51.055673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.681 [2024-12-10 12:32:51.055679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:51.055688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:43960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.681 [2024-12-10 12:32:51.055695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:51.055703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.681 [2024-12-10 12:32:51.055710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:51.055719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:43976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.681 [2024-12-10 12:32:51.055726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:51.055734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.681 [2024-12-10 12:32:51.055741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:51.055749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.681 [2024-12-10 12:32:51.055756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:51.055764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.681 [2024-12-10 12:32:51.055771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:51.055779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.681 [2024-12-10 
12:32:51.055786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:51.055795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.681 [2024-12-10 12:32:51.055802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:51.055810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.681 [2024-12-10 12:32:51.055817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:51.055826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.681 [2024-12-10 12:32:51.055832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:51.055840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.681 [2024-12-10 12:32:51.055847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:51.055856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:44048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.681 [2024-12-10 12:32:51.055863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:51.055871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:53 nsid:1 lba:44056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.681 [2024-12-10 12:32:51.055879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:51.055887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:44064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.681 [2024-12-10 12:32:51.055894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:51.055902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.681 [2024-12-10 12:32:51.055908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:51.055916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.681 [2024-12-10 12:32:51.055923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:51.055931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.681 [2024-12-10 12:32:51.055938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.681 [2024-12-10 12:32:51.055946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.055952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.055960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.055967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.055975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.055982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.055990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.055997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:44128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056042] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:44168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:35.682 [2024-12-10 12:32:51.056229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:44264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:44328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.682 [2024-12-10 12:32:51.056446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 [2024-12-10 12:32:51.056469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:44360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.682 
[2024-12-10 12:32:51.056484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.682 [2024-12-10 12:32:51.056491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.683 [2024-12-10 12:32:51.056505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.683 [2024-12-10 12:32:51.056521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:44392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.683 [2024-12-10 12:32:51.056535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.683 [2024-12-10 12:32:51.056550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.683 [2024-12-10 12:32:51.056564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.683 [2024-12-10 12:32:51.056580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:44424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.683 [2024-12-10 12:32:51.056595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.683 [2024-12-10 12:32:51.056610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:44440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.683 [2024-12-10 12:32:51.056627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.683 [2024-12-10 12:32:51.056642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 
lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.683 [2024-12-10 12:32:51.056656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.683 [2024-12-10 12:32:51.056674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.683 [2024-12-10 12:32:51.056689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.683 [2024-12-10 12:32:51.056704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.683 [2024-12-10 12:32:51.056718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.683 [2024-12-10 12:32:51.056734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 
[2024-12-10 12:32:51.056743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.683 [2024-12-10 12:32:51.056749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.683 [2024-12-10 12:32:51.056764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.683 [2024-12-10 12:32:51.056779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.683 [2024-12-10 12:32:51.056794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.683 [2024-12-10 12:32:51.056809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.683 [2024-12-10 12:32:51.056828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.683 [2024-12-10 12:32:51.056843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.683 [2024-12-10 12:32:51.056858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.683 [2024-12-10 12:32:51.056873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.683 [2024-12-10 12:32:51.056889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.683 [2024-12-10 12:32:51.056904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 
lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.683 [2024-12-10 12:32:51.056918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.683 [2024-12-10 12:32:51.056933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.683 [2024-12-10 12:32:51.056948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.683 [2024-12-10 12:32:51.056962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.683 [2024-12-10 12:32:51.056976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.056984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.683 [2024-12-10 12:32:51.056992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 
[2024-12-10 12:32:51.056999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.683 [2024-12-10 12:32:51.057008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.057016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.683 [2024-12-10 12:32:51.057022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.057030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.683 [2024-12-10 12:32:51.057037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.057044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.683 [2024-12-10 12:32:51.057051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.057059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.683 [2024-12-10 12:32:51.057071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.057079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.683 [2024-12-10 12:32:51.057085] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.057093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.683 [2024-12-10 12:32:51.057101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.683 [2024-12-10 12:32:51.057109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 
lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 
12:32:51.057263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057350] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:35.684 [2024-12-10 12:32:51.057574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:35.684 [2024-12-10 12:32:51.057606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:35.684 [2024-12-10 12:32:51.057612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44944 len:8 PRP1 0x0 PRP2 0x0 00:24:35.684 [2024-12-10 12:32:51.057620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057667] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:35.684 [2024-12-10 12:32:51.057690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.684 [2024-12-10 12:32:51.057697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.684 [2024-12-10 12:32:51.057711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.684 [2024-12-10 12:32:51.057725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:35.684 [2024-12-10 12:32:51.057738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:35.684 [2024-12-10 12:32:51.057745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:35.684 [2024-12-10 12:32:51.057768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa60fe0 (9): Bad file descriptor 00:24:35.684 [2024-12-10 12:32:51.060618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:35.684 [2024-12-10 12:32:51.084894] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:24:35.684 10955.10 IOPS, 42.79 MiB/s [2024-12-10T11:32:57.852Z] 10952.09 IOPS, 42.78 MiB/s [2024-12-10T11:32:57.852Z] 10959.75 IOPS, 42.81 MiB/s [2024-12-10T11:32:57.852Z] 10977.62 IOPS, 42.88 MiB/s [2024-12-10T11:32:57.852Z] 10987.36 IOPS, 42.92 MiB/s [2024-12-10T11:32:57.852Z] 10989.40 IOPS, 42.93 MiB/s 00:24:35.684 Latency(us) 00:24:35.684 [2024-12-10T11:32:57.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.684 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:35.684 Verification LBA range: start 0x0 length 0x4000 00:24:35.684 NVMe0n1 : 15.01 10990.95 42.93 390.50 0.00 11223.75 432.75 17552.25 00:24:35.684 [2024-12-10T11:32:57.852Z] =================================================================================================================== 00:24:35.684 [2024-12-10T11:32:57.852Z] Total : 10990.95 42.93 390.50 0.00 11223.75 432.75 17552.25 00:24:35.685 Received shutdown signal, test time was about 15.000000 seconds 00:24:35.685 00:24:35.685 Latency(us) 00:24:35.685 [2024-12-10T11:32:57.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.685 [2024-12-10T11:32:57.853Z] =================================================================================================================== 00:24:35.685 [2024-12-10T11:32:57.853Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:35.685 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:35.685 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:35.685 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:35.685 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1729671 00:24:35.685 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:35.685 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1729671 /var/tmp/bdevperf.sock 00:24:35.685 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1729671 ']' 00:24:35.685 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:35.685 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:35.685 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:35.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:35.685 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:35.685 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:35.685 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:35.685 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:35.685 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:35.685 [2024-12-10 12:32:57.725385] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:35.685 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:35.943 [2024-12-10 12:32:57.913934] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 
00:24:35.943 12:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:36.202 NVMe0n1 00:24:36.202 12:32:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:36.769 00:24:36.769 12:32:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:37.027 00:24:37.027 12:32:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:37.027 12:32:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:37.027 12:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:37.285 12:32:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:40.567 12:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:40.567 12:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:40.567 12:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:40.567 12:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1730572
00:24:40.567 12:33:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1730572
00:24:41.941 {
00:24:41.941 "results": [
00:24:41.941 {
00:24:41.941 "job": "NVMe0n1",
00:24:41.941 "core_mask": "0x1",
00:24:41.941 "workload": "verify",
00:24:41.941 "status": "finished",
00:24:41.941 "verify_range": {
00:24:41.941 "start": 0,
00:24:41.941 "length": 16384
00:24:41.941 },
00:24:41.941 "queue_depth": 128,
00:24:41.941 "io_size": 4096,
00:24:41.941 "runtime": 1.009288,
00:24:41.941 "iops": 11289.146408161001,
00:24:41.941 "mibps": 44.09822815687891,
00:24:41.941 "io_failed": 0,
00:24:41.941 "io_timeout": 0,
00:24:41.941 "avg_latency_us": 11297.073157954987,
00:24:41.941 "min_latency_us": 1894.8452173913045,
00:24:41.942 "max_latency_us": 11454.553043478261
00:24:41.942 }
00:24:41.942 ],
00:24:41.942 "core_count": 1
00:24:41.942 }
00:24:41.942 12:33:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt
00:24:41.942 [2024-12-10 12:32:57.341886] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization...
00:24:41.942 [2024-12-10 12:32:57.341938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1729671 ]
00:24:41.942 [2024-12-10 12:32:57.417149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:41.942 [2024-12-10 12:32:57.454400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:41.942 [2024-12-10 12:32:59.343978] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:41.942 [2024-12-10 12:32:59.344038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:41.942 [2024-12-10 12:32:59.344054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.942 [2024-12-10 12:32:59.344067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:41.942 [2024-12-10 12:32:59.344078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.942 [2024-12-10 12:32:59.344090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:41.942 [2024-12-10 12:32:59.344101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.942 [2024-12-10 12:32:59.344113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:41.942 [2024-12-10 12:32:59.344124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:41.942 [2024-12-10 12:32:59.344135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:24:41.942 [2024-12-10 12:32:59.344181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:24:41.942 [2024-12-10 12:32:59.344204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a7fe0 (9): Bad file descriptor
00:24:41.942 [2024-12-10 12:32:59.349115] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:24:41.942 Running I/O for 1 seconds...
00:24:41.942 11265.00 IOPS, 44.00 MiB/s
00:24:41.942 Latency(us)
00:24:41.942 [2024-12-10T11:33:04.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:41.942 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:41.942 Verification LBA range: start 0x0 length 0x4000
00:24:41.942 NVMe0n1 : 1.01 11289.15 44.10 0.00 0.00 11297.07 1894.85 11454.55
00:24:41.942 [2024-12-10T11:33:04.110Z] ===================================================================================================================
00:24:41.942 [2024-12-10T11:33:04.110Z] Total : 11289.15 44.10 0.00 0.00 11297.07 1894.85 11454.55
00:24:41.942 12:33:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:41.942 12:33:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:24:41.942 12:33:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:41.942 12:33:04
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:41.942 12:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:24:42.200 12:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:42.458 12:33:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:24:45.740 12:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:45.740 12:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:24:45.740 12:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1729671
00:24:45.740 12:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1729671 ']'
00:24:45.740 12:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1729671
00:24:45.740 12:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:24:45.740 12:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:45.740 12:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1729671
00:24:45.740 12:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:45.740 12:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:45.740 12:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1729671'
killing process with pid 1729671
00:24:45.740 12:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1729671
00:24:45.740 12:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1729671
00:24:45.998 12:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:24:45.998 12:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:45.998 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:24:45.998 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt
00:24:45.998 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:24:45.998 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:45.998 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:24:45.998 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:45.998 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:24:45.998 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:45.998 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:45.998 rmmod nvme_tcp
00:24:46.257 rmmod nvme_fabrics
00:24:46.257 rmmod nvme_keyring
00:24:46.257 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:46.257 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:24:46.257 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:24:46.257 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1726726 ']'
00:24:46.257 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1726726
00:24:46.257 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1726726 ']'
00:24:46.257 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1726726
00:24:46.257 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:24:46.257 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:46.257 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1726726
00:24:46.257 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:24:46.257 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:24:46.257 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1726726'
killing process with pid 1726726
00:24:46.257 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1726726
00:24:46.257 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1726726
00:24:46.517 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:46.517 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:46.517 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:46.517 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
00:24:46.517 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
00:24:46.517 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:46.517 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
00:24:46.517 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:46.517 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:46.517 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:46.517 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:46.517 12:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:48.421 12:33:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:48.421
00:24:48.421 real 0m37.402s
00:24:48.421 user 1m58.281s
00:24:48.421 sys 0m7.926s
00:24:48.421 12:33:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:48.421 12:33:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:48.421 ************************************
00:24:48.421 END TEST nvmf_failover
00:24:48.421 ************************************
00:24:48.422 12:33:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:24:48.422 12:33:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:48.422 12:33:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:48.422 12:33:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:48.681 ************************************
00:24:48.681 START TEST nvmf_host_discovery
00:24:48.681 ************************************
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:24:48.681 * Looking for test storage...
00:24:48.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1
00:24:48.681 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:24:48.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:48.682 --rc genhtml_branch_coverage=1
00:24:48.682 --rc genhtml_function_coverage=1
00:24:48.682 --rc genhtml_legend=1
00:24:48.682 --rc geninfo_all_blocks=1
00:24:48.682 --rc geninfo_unexecuted_blocks=1
00:24:48.682
00:24:48.682 '
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:24:48.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:48.682 --rc genhtml_branch_coverage=1
00:24:48.682 --rc genhtml_function_coverage=1
00:24:48.682 --rc genhtml_legend=1
00:24:48.682 --rc geninfo_all_blocks=1
00:24:48.682 --rc geninfo_unexecuted_blocks=1
00:24:48.682
00:24:48.682 '
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:24:48.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:48.682 --rc genhtml_branch_coverage=1
00:24:48.682 --rc genhtml_function_coverage=1
00:24:48.682 --rc genhtml_legend=1
00:24:48.682 --rc geninfo_all_blocks=1
00:24:48.682 --rc geninfo_unexecuted_blocks=1
00:24:48.682
00:24:48.682 '
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:24:48.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:48.682 --rc genhtml_branch_coverage=1
00:24:48.682 --rc genhtml_function_coverage=1
00:24:48.682 --rc genhtml_legend=1
00:24:48.682 --rc geninfo_all_blocks=1
00:24:48.682 --rc geninfo_unexecuted_blocks=1
00:24:48.682
00:24:48.682 '
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable
00:24:48.682 12:33:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=()
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=()
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=()
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=()
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=()
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=()
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=()
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
Found 0000:86:00.0 (0x8086 - 0x159b)
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
Found 0000:86:00.1 (0x8086 - 0x159b)
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
Found net devices under 0000:86:00.0: cvl_0_0
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
Found net devices under 0000:86:00.1: cvl_0_1
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:24:55.252 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 --
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:55.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:55.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:24:55.253 00:24:55.253 --- 10.0.0.2 ping statistics --- 00:24:55.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.253 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:55.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:55.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:24:55.253 00:24:55.253 --- 10.0.0.1 ping statistics --- 00:24:55.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.253 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:55.253 
12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1735421 00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1735421 00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1735421 ']' 00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:55.253 12:33:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.253 [2024-12-10 12:33:16.797644] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:24:55.253 [2024-12-10 12:33:16.797689] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.253 [2024-12-10 12:33:16.878860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.253 [2024-12-10 12:33:16.917032] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.253 [2024-12-10 12:33:16.917066] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.253 [2024-12-10 12:33:16.917076] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:55.253 [2024-12-10 12:33:16.917083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:55.253 [2024-12-10 12:33:16.917088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:55.253 [2024-12-10 12:33:16.917622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.253 [2024-12-10 12:33:17.060490] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.253 [2024-12-10 12:33:17.072714] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:55.253 12:33:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.253 null0 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.253 null1 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1735456 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1735456 /tmp/host.sock 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 1735456 ']' 00:24:55.253 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:55.254 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:55.254 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:55.254 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:55.254 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:55.254 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.254 [2024-12-10 12:33:17.150891] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:24:55.254 [2024-12-10 12:33:17.150934] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1735456 ] 00:24:55.254 [2024-12-10 12:33:17.225923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.254 [2024-12-10 12:33:17.267401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.254 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:55.254 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:55.254 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:55.254 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:55.254 
12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.254 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.254 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.254 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:55.254 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.254 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.254 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.254 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:55.254 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:55.254 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:55.254 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.254 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.254 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:55.254 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:55.254 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:55.254 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:55.513 12:33:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:55.513 
12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.513 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.772 [2024-12-10 12:33:17.690272] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:55.772 
12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:24:55.772 12:33:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:56.339 [2024-12-10 12:33:18.428311] bdev_nvme.c:7517:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:56.339 [2024-12-10 12:33:18.428330] bdev_nvme.c:7603:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:56.339 [2024-12-10 12:33:18.428344] bdev_nvme.c:7480:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:56.598 [2024-12-10 12:33:18.514598] bdev_nvme.c:7446:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:56.598 [2024-12-10 12:33:18.610225] bdev_nvme.c:5662:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was 
created to 10.0.0.2:4420 00:24:56.598 [2024-12-10 12:33:18.611002] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xc739a0:1 started. 00:24:56.598 [2024-12-10 12:33:18.612371] bdev_nvme.c:7336:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:56.598 [2024-12-10 12:33:18.612386] bdev_nvme.c:7295:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:56.598 [2024-12-10 12:33:18.617324] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xc739a0 was disconnected and freed. delete nvme_qpair. 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.857 
12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:56.857 12:33:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.857 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:24:56.857 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:56.857 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:56.857 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:56.857 
12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:56.857 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:56.857 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:56.857 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:56.857 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:56.857 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:56.857 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:56.857 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:56.857 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.857 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.116 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.116 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:57.116 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:57.116 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:57.116 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:57.116 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:57.116 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.116 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.116 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.116 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:57.116 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:57.116 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:57.116 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:57.116 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:57.116 
12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:57.116 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.116 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:57.116 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.116 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.116 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:57.116 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:57.374 [2024-12-10 12:33:19.321740] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xc73d20:1 started. 00:24:57.374 [2024-12-10 12:33:19.329086] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xc73d20 was disconnected and freed. delete nvme_qpair. 
00:24:57.374 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.374 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:57.374 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:57.374 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:57.374 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:57.374 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:57.374 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:57.374 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:57.374 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:57.374 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:57.374 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:57.374 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:57.374 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:57.374 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.374 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.374 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.374 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:57.374 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:57.374 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:57.374 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:57.374 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:57.374 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.375 [2024-12-10 12:33:19.402899] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:57.375 [2024-12-10 12:33:19.403016] bdev_nvme.c:7499:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:57.375 [2024-12-10 12:33:19.403035] bdev_nvme.c:7480:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:57.375 12:33:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:57.375 12:33:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:57.375 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.375 [2024-12-10 12:33:19.529751] bdev_nvme.c:7441:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:57.634 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:57.634 12:33:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:57.892 [2024-12-10 12:33:19.840059] bdev_nvme.c:5662:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:24:57.892 [2024-12-10 12:33:19.840095] bdev_nvme.c:7336:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:57.892 [2024-12-10 12:33:19.840103] bdev_nvme.c:7295:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:24:57.892 [2024-12-10 12:33:19.840108] bdev_nvme.c:7295:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:58.459 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.719 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:58.719 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:58.719 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:58.719 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:58.719 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:58.719 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.719 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.719 [2024-12-10 12:33:20.659320] bdev_nvme.c:7499:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:58.719 [2024-12-10 12:33:20.659343] bdev_nvme.c:7480:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:58.719 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.719 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:58.719 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:58.719 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:58.719 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:58.719 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
'[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:58.719 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:58.719 [2024-12-10 12:33:20.666090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.719 [2024-12-10 12:33:20.666111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:58.719 [2024-12-10 12:33:20.666119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.719 [2024-12-10 12:33:20.666131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:58.719 [2024-12-10 12:33:20.666138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.719 [2024-12-10 12:33:20.666145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:58.719 [2024-12-10 12:33:20.666152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.719 [2024-12-10 12:33:20.666163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:58.719 [2024-12-10 12:33:20.666170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc45970 is same with the state(6) to be set 00:24:58.719 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:58.719 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:58.719 12:33:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:58.719 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.719 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:58.719 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.719 [2024-12-10 12:33:20.676101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc45970 (9): Bad file descriptor 00:24:58.719 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.719 [2024-12-10 12:33:20.686137] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:58.719 [2024-12-10 12:33:20.686148] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:58.719 [2024-12-10 12:33:20.686155] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:58.719 [2024-12-10 12:33:20.686165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:58.719 [2024-12-10 12:33:20.686185] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:58.719 [2024-12-10 12:33:20.686368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.719 [2024-12-10 12:33:20.686384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc45970 with addr=10.0.0.2, port=4420 00:24:58.719 [2024-12-10 12:33:20.686392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc45970 is same with the state(6) to be set 00:24:58.719 [2024-12-10 12:33:20.686405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc45970 (9): Bad file descriptor 00:24:58.719 [2024-12-10 12:33:20.686427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:58.719 [2024-12-10 12:33:20.686435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:58.719 [2024-12-10 12:33:20.686443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:58.719 [2024-12-10 12:33:20.686449] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:58.719 [2024-12-10 12:33:20.686454] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:58.719 [2024-12-10 12:33:20.686459] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:58.719 [2024-12-10 12:33:20.696218] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:58.719 [2024-12-10 12:33:20.696231] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:58.720 [2024-12-10 12:33:20.696235] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:58.720 [2024-12-10 12:33:20.696239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:58.720 [2024-12-10 12:33:20.696253] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:58.720 [2024-12-10 12:33:20.696519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.720 [2024-12-10 12:33:20.696534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc45970 with addr=10.0.0.2, port=4420 00:24:58.720 [2024-12-10 12:33:20.696542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc45970 is same with the state(6) to be set 00:24:58.720 [2024-12-10 12:33:20.696553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc45970 (9): Bad file descriptor 00:24:58.720 [2024-12-10 12:33:20.696570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:58.720 [2024-12-10 12:33:20.696577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:58.720 [2024-12-10 12:33:20.696584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:58.720 [2024-12-10 12:33:20.696590] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:58.720 [2024-12-10 12:33:20.696594] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:58.720 [2024-12-10 12:33:20.696597] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:58.720 [2024-12-10 12:33:20.706285] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:58.720 [2024-12-10 12:33:20.706301] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:58.720 [2024-12-10 12:33:20.706305] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:58.720 [2024-12-10 12:33:20.706310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:58.720 [2024-12-10 12:33:20.706326] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:58.720 [2024-12-10 12:33:20.706580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.720 [2024-12-10 12:33:20.706595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc45970 with addr=10.0.0.2, port=4420 00:24:58.720 [2024-12-10 12:33:20.706603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc45970 is same with the state(6) to be set 00:24:58.720 [2024-12-10 12:33:20.706615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc45970 (9): Bad file descriptor 00:24:58.720 [2024-12-10 12:33:20.706637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:58.720 [2024-12-10 12:33:20.706644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:58.720 [2024-12-10 12:33:20.706652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:58.720 [2024-12-10 12:33:20.706657] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:58.720 [2024-12-10 12:33:20.706662] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:58.720 [2024-12-10 12:33:20.706666] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:58.720 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.720 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:58.720 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:58.720 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:58.720 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:58.720 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:58.720 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:58.720 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:58.720 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:58.720 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:58.720 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.720 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:58.720 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.720 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
xargs 00:24:58.720 [2024-12-10 12:33:20.716357] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:58.720 [2024-12-10 12:33:20.716370] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:58.720 [2024-12-10 12:33:20.716374] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:58.720 [2024-12-10 12:33:20.716378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:58.720 [2024-12-10 12:33:20.716392] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:58.720 [2024-12-10 12:33:20.716565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.720 [2024-12-10 12:33:20.716591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc45970 with addr=10.0.0.2, port=4420 00:24:58.720 [2024-12-10 12:33:20.716605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc45970 is same with the state(6) to be set 00:24:58.720 [2024-12-10 12:33:20.716619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc45970 (9): Bad file descriptor 00:24:58.720 [2024-12-10 12:33:20.716631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:58.720 [2024-12-10 12:33:20.716639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:58.720 [2024-12-10 12:33:20.716646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:58.720 [2024-12-10 12:33:20.716654] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:24:58.720 [2024-12-10 12:33:20.716659] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:58.720 [2024-12-10 12:33:20.716663] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:58.720 [2024-12-10 12:33:20.726423] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:58.720 [2024-12-10 12:33:20.726437] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:58.720 [2024-12-10 12:33:20.726442] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:58.720 [2024-12-10 12:33:20.726446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:58.720 [2024-12-10 12:33:20.726464] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:58.720 [2024-12-10 12:33:20.726623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.720 [2024-12-10 12:33:20.726637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc45970 with addr=10.0.0.2, port=4420 00:24:58.720 [2024-12-10 12:33:20.726645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc45970 is same with the state(6) to be set 00:24:58.720 [2024-12-10 12:33:20.726657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc45970 (9): Bad file descriptor 00:24:58.720 [2024-12-10 12:33:20.726666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:58.720 [2024-12-10 12:33:20.726673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:58.720 [2024-12-10 12:33:20.726680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:58.720 [2024-12-10 12:33:20.726685] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:58.720 [2024-12-10 12:33:20.726690] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:58.720 [2024-12-10 12:33:20.726694] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:58.720 [2024-12-10 12:33:20.736495] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:58.720 [2024-12-10 12:33:20.736505] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:24:58.720 [2024-12-10 12:33:20.736509] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:58.720 [2024-12-10 12:33:20.736513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:58.720 [2024-12-10 12:33:20.736526] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:58.720 [2024-12-10 12:33:20.736680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.720 [2024-12-10 12:33:20.736692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc45970 with addr=10.0.0.2, port=4420 00:24:58.720 [2024-12-10 12:33:20.736699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc45970 is same with the state(6) to be set 00:24:58.720 [2024-12-10 12:33:20.736709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc45970 (9): Bad file descriptor 00:24:58.720 [2024-12-10 12:33:20.736719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:58.720 [2024-12-10 12:33:20.736726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:58.720 [2024-12-10 12:33:20.736733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:58.720 [2024-12-10 12:33:20.736739] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:58.720 [2024-12-10 12:33:20.736744] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:58.720 [2024-12-10 12:33:20.736747] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:24:58.720 [2024-12-10 12:33:20.745376] bdev_nvme.c:7304:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:58.720 [2024-12-10 12:33:20.745394] bdev_nvme.c:7295:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:58.720 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.720 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:58.720 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:58.720 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:58.720 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.721 12:33:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_subsystem_names 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.721 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:58.980 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.980 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:58.980 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:58.980 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:58.980 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:58.980 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:58.980 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:58.980 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:58.980 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:58.980 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:58.980 12:33:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.980 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.980 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:58.980 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:58.980 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:58.980 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.980 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:24:58.980 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:58.980 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:58.981 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:24:58.981 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:58.981 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:58.981 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:58.981 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:58.981 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:58.981 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:58.981 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:58.981 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:58.981 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.981 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:58.981 12:33:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.981 12:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:58.981 12:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:58.981 12:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:58.981 12:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:58.981 12:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:58.981 12:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.981 12:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:59.916 [2024-12-10 12:33:22.037312] bdev_nvme.c:7517:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:59.916 [2024-12-10 12:33:22.037328] bdev_nvme.c:7603:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:59.916 [2024-12-10 12:33:22.037338] bdev_nvme.c:7480:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:00.175 [2024-12-10 12:33:22.125601] bdev_nvme.c:7446:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 
new subsystem nvme0 00:25:00.435 [2024-12-10 12:33:22.436995] bdev_nvme.c:5662:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:00.435 [2024-12-10 12:33:22.437630] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xc7ee70:1 started. 00:25:00.435 [2024-12-10 12:33:22.439251] bdev_nvme.c:7336:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:00.435 [2024-12-10 12:33:22.439276] bdev_nvme.c:7295:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.435 request: 00:25:00.435 { 00:25:00.435 "name": "nvme", 00:25:00.435 "trtype": "tcp", 00:25:00.435 "traddr": "10.0.0.2", 00:25:00.435 "adrfam": "ipv4", 00:25:00.435 "trsvcid": "8009", 00:25:00.435 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:00.435 "wait_for_attach": true, 00:25:00.435 "method": "bdev_nvme_start_discovery", 00:25:00.435 "req_id": 1 00:25:00.435 } 00:25:00.435 Got JSON-RPC error response 00:25:00.435 response: 00:25:00.435 { 00:25:00.435 "code": -17, 00:25:00.435 "message": "File exists" 00:25:00.435 } 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@10 -- # set +x 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.435 [2024-12-10 12:33:22.487804] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xc7ee70 was disconnected and freed. delete nvme_qpair. 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.435 request: 00:25:00.435 { 00:25:00.435 "name": "nvme_second", 00:25:00.435 "trtype": "tcp", 00:25:00.435 "traddr": "10.0.0.2", 00:25:00.435 "adrfam": "ipv4", 00:25:00.435 "trsvcid": "8009", 00:25:00.435 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:00.435 "wait_for_attach": true, 00:25:00.435 "method": "bdev_nvme_start_discovery", 00:25:00.435 "req_id": 1 00:25:00.435 } 00:25:00.435 Got JSON-RPC error response 00:25:00.435 response: 00:25:00.435 { 00:25:00.435 "code": -17, 00:25:00.435 "message": "File exists" 00:25:00.435 } 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 
00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:00.435 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.694 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:00.694 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:00.694 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:00.694 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:00.694 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.694 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:00.694 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.694 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:00.694 12:33:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.694 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:00.694 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:00.694 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:00.694 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:00.694 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:00.695 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:00.695 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:00.695 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:00.695 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:00.695 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.695 12:33:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.630 [2024-12-10 12:33:23.674726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-12-10 12:33:23.674752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0xc737a0 with addr=10.0.0.2, port=8010 00:25:01.630 [2024-12-10 12:33:23.674767] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:01.630 [2024-12-10 12:33:23.674773] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:01.630 [2024-12-10 12:33:23.674780] bdev_nvme.c:7585:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:02.565 [2024-12-10 12:33:24.677195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.565 [2024-12-10 12:33:24.677221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc737a0 with addr=10.0.0.2, port=8010 00:25:02.565 [2024-12-10 12:33:24.677233] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:02.565 [2024-12-10 12:33:24.677240] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:02.565 [2024-12-10 12:33:24.677246] bdev_nvme.c:7585:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:03.940 [2024-12-10 12:33:25.679362] bdev_nvme.c:7560:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:03.940 request: 00:25:03.940 { 00:25:03.940 "name": "nvme_second", 00:25:03.940 "trtype": "tcp", 00:25:03.940 "traddr": "10.0.0.2", 00:25:03.940 "adrfam": "ipv4", 00:25:03.940 "trsvcid": "8010", 00:25:03.940 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:03.940 "wait_for_attach": false, 00:25:03.940 "attach_timeout_ms": 3000, 00:25:03.940 "method": "bdev_nvme_start_discovery", 00:25:03.941 "req_id": 1 00:25:03.941 } 00:25:03.941 Got JSON-RPC error response 00:25:03.941 response: 00:25:03.941 { 00:25:03.941 "code": -110, 00:25:03.941 "message": "Connection timed out" 00:25:03.941 } 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:03.941 12:33:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1735456 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:03.941 12:33:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:03.941 rmmod nvme_tcp 00:25:03.941 rmmod nvme_fabrics 00:25:03.941 rmmod nvme_keyring 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1735421 ']' 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1735421 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1735421 ']' 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1735421 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1735421 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1735421' 
00:25:03.941 killing process with pid 1735421 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1735421 00:25:03.941 12:33:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1735421 00:25:03.941 12:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:03.941 12:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:03.941 12:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:03.941 12:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:03.941 12:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:03.941 12:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:03.941 12:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:03.941 12:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:03.941 12:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:03.941 12:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.941 12:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.941 12:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:06.478 00:25:06.478 real 0m17.495s 00:25:06.478 user 0m20.960s 00:25:06.478 sys 0m5.879s 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:06.478 12:33:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.478 ************************************ 00:25:06.478 END TEST nvmf_host_discovery 00:25:06.478 ************************************ 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.478 ************************************ 00:25:06.478 START TEST nvmf_host_multipath_status 00:25:06.478 ************************************ 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:06.478 * Looking for test storage... 
00:25:06.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:06.478 12:33:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:06.478 12:33:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:06.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.478 --rc genhtml_branch_coverage=1 00:25:06.478 --rc genhtml_function_coverage=1 00:25:06.478 --rc genhtml_legend=1 00:25:06.478 --rc geninfo_all_blocks=1 00:25:06.478 --rc geninfo_unexecuted_blocks=1 00:25:06.478 00:25:06.478 ' 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:06.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.478 --rc genhtml_branch_coverage=1 00:25:06.478 --rc genhtml_function_coverage=1 00:25:06.478 --rc genhtml_legend=1 00:25:06.478 --rc geninfo_all_blocks=1 00:25:06.478 --rc geninfo_unexecuted_blocks=1 00:25:06.478 00:25:06.478 ' 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:06.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.478 --rc genhtml_branch_coverage=1 00:25:06.478 --rc genhtml_function_coverage=1 00:25:06.478 --rc genhtml_legend=1 00:25:06.478 --rc geninfo_all_blocks=1 00:25:06.478 --rc geninfo_unexecuted_blocks=1 00:25:06.478 00:25:06.478 ' 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:06.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.478 --rc genhtml_branch_coverage=1 00:25:06.478 --rc genhtml_function_coverage=1 00:25:06.478 --rc genhtml_legend=1 00:25:06.478 --rc geninfo_all_blocks=1 00:25:06.478 --rc geninfo_unexecuted_blocks=1 00:25:06.478 00:25:06.478 ' 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:06.478 
12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.478 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:06.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/bpftrace.sh 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:06.479 12:33:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:06.479 12:33:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:13.058 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.058 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:13.058 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:13.058 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:13.058 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:13.058 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:13.058 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:13.058 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:13.058 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:13.058 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:13.059 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:13.059 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:13.059 Found net devices under 0000:86:00.0: cvl_0_0 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.059 12:33:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:13.059 Found net devices under 0000:86:00.1: cvl_0_1 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.059 12:33:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:13.059 12:33:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:13.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:25:13.059 00:25:13.059 --- 10.0.0.2 ping statistics --- 00:25:13.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.059 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:13.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:25:13.059 00:25:13.059 --- 10.0.0.1 ping statistics --- 00:25:13.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.059 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1740539 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 1740539 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1740539 ']' 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:13.059 [2024-12-10 12:33:34.358633] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:25:13.059 [2024-12-10 12:33:34.358679] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.059 [2024-12-10 12:33:34.437042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:13.059 [2024-12-10 12:33:34.477861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:13.059 [2024-12-10 12:33:34.477901] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:13.059 [2024-12-10 12:33:34.477908] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:13.059 [2024-12-10 12:33:34.477914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:13.059 [2024-12-10 12:33:34.477919] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:13.059 [2024-12-10 12:33:34.479128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.059 [2024-12-10 12:33:34.479131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1740539 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:13.059 [2024-12-10 12:33:34.783907] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:13.059 12:33:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:25:13.059 Malloc0 00:25:13.059 12:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:13.334 12:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:13.334 12:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:13.613 [2024-12-10 12:33:35.595877] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:13.613 12:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:13.952 [2024-12-10 12:33:35.800372] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:13.952 12:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1740844 00:25:13.952 12:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:13.952 12:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:13.952 12:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1740844 /var/tmp/bdevperf.sock 00:25:13.952 
12:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1740844 ']' 00:25:13.952 12:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:13.952 12:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.952 12:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:13.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:13.952 12:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.952 12:33:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:13.952 12:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:13.952 12:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:25:13.952 12:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:14.210 12:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:14.777 Nvme0n1 00:25:14.777 12:33:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 
-n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:15.035 Nvme0n1 00:25:15.036 12:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:15.036 12:33:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:16.940 12:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:16.940 12:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:17.200 12:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:17.459 12:33:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:18.394 12:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:18.394 12:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:18.394 12:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.394 12:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:18.653 12:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.653 12:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:18.653 12:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.653 12:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:18.912 12:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:18.912 12:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:18.912 12:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.912 12:33:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:19.171 12:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.171 12:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:19.171 12:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.171 12:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:19.429 12:33:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.429 12:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:19.429 12:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.429 12:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:19.429 12:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.429 12:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:19.429 12:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.429 12:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:19.688 12:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.688 12:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:19.688 12:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:19.946 12:33:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:20.205 12:33:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:21.140 12:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:21.140 12:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:21.140 12:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.140 12:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:21.399 12:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:21.399 12:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:21.399 12:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.399 12:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:21.658 12:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.658 12:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:21.658 12:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.658 12:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:21.658 12:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.658 12:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:21.658 12:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.658 12:33:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:21.917 12:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.917 12:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:21.917 12:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:21.917 12:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.175 12:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.175 12:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:22.175 12:33:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.175 12:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:22.434 12:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.434 12:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:22.434 12:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:22.693 12:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:22.951 12:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:23.885 12:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:23.885 12:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:23.885 12:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.885 12:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").current' 00:25:24.144 12:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.144 12:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:24.144 12:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.144 12:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:24.402 12:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:24.402 12:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:24.402 12:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.402 12:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:24.402 12:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.402 12:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:24.403 12:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.403 12:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").connected' 00:25:24.661 12:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.661 12:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:24.661 12:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.661 12:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:24.920 12:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.920 12:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:24.920 12:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.920 12:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:25.179 12:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.179 12:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:25.179 12:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:25.437 12:33:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:25.437 12:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:26.814 12:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:26.814 12:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:26.814 12:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.814 12:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:26.814 12:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.814 12:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:26.814 12:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.814 12:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:27.073 12:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:27.073 12:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 
00:25:27.073 12:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.073 12:33:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:27.073 12:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.073 12:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:27.073 12:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.073 12:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:27.332 12:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.332 12:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:27.332 12:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.332 12:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:27.591 12:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.591 12:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 
accessible false 00:25:27.591 12:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.591 12:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:27.849 12:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:27.849 12:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:27.849 12:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:27.849 12:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:28.108 12:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:29.484 12:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:29.484 12:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:29.484 12:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.484 12:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:29.484 12:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:29.484 12:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:29.484 12:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.484 12:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:29.484 12:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:29.484 12:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:29.484 12:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.484 12:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:29.742 12:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.742 12:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:29.742 12:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.742 12:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:30.001 12:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.001 12:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:30.001 12:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:30.001 12:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.259 12:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:30.259 12:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:30.259 12:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:30.259 12:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.259 12:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:30.260 12:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:30.260 12:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:30.525 
12:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:30.784 12:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:31.719 12:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:31.719 12:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:31.719 12:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.719 12:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:31.977 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:31.977 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:31.977 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:31.977 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:32.236 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.236 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 
00:25:32.236 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.236 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:32.494 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.494 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:32.494 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.495 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:32.753 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.753 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:32.753 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:32.753 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.753 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:32.753 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 
accessible true 00:25:32.753 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.753 12:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:33.011 12:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.011 12:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:33.268 12:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:33.268 12:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:33.527 12:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:33.785 12:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:34.717 12:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:34.717 12:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:34.717 12:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.717 12:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:34.975 12:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.975 12:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:34.975 12:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.975 12:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:35.234 12:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.234 12:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:35.234 12:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.234 12:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:35.234 12:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.234 12:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:35.234 12:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.234 12:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:35.493 12:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.493 12:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:35.493 12:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.493 12:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:35.752 12:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.752 12:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:35.752 12:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.752 12:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:36.011 12:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.011 12:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:36.011 12:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:36.270 12:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:36.270 12:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:37.647 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:37.647 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:37.647 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.647 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:37.647 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:37.647 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:37.647 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.647 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:37.906 12:33:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.906 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:37.906 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.906 12:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:37.906 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.906 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:37.906 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.906 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:38.165 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.165 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:38.165 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:38.165 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.424 
12:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.424 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:38.424 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.424 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:38.683 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.683 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:38.683 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:38.942 12:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:39.202 12:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:40.220 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:40.220 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:40.220 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.220 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:40.220 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.220 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:40.220 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.220 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:40.479 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.479 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:40.479 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.479 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:40.738 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.738 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:40.738 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.738 12:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:40.997 12:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.997 12:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:40.998 12:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.998 12:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:41.257 12:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.257 12:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:41.257 12:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.257 12:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:41.516 12:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.516 12:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:41.516 12:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:41.516 12:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:41.776 12:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:42.713 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:42.713 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:42.977 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.977 12:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:42.977 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.977 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:42.977 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.977 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:43.236 12:34:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:43.236 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:43.236 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.236 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:43.494 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.495 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:43.495 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.495 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:43.753 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.753 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:43.754 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:43.754 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 
00:25:44.013 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.013 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:44.013 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:44.013 12:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.013 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:44.013 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1740844 00:25:44.013 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1740844 ']' 00:25:44.013 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1740844 00:25:44.013 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:44.013 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:44.013 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1740844 00:25:44.277 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:44.277 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:44.277 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1740844' 00:25:44.277 killing process with pid 
1740844 00:25:44.277 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1740844 00:25:44.277 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1740844 00:25:44.277 { 00:25:44.277 "results": [ 00:25:44.277 { 00:25:44.277 "job": "Nvme0n1", 00:25:44.277 "core_mask": "0x4", 00:25:44.277 "workload": "verify", 00:25:44.277 "status": "terminated", 00:25:44.277 "verify_range": { 00:25:44.277 "start": 0, 00:25:44.277 "length": 16384 00:25:44.277 }, 00:25:44.277 "queue_depth": 128, 00:25:44.277 "io_size": 4096, 00:25:44.277 "runtime": 29.005128, 00:25:44.277 "iops": 10514.071856535162, 00:25:44.277 "mibps": 41.070593189590475, 00:25:44.277 "io_failed": 0, 00:25:44.277 "io_timeout": 0, 00:25:44.277 "avg_latency_us": 12154.460551498505, 00:25:44.277 "min_latency_us": 141.5791304347826, 00:25:44.277 "max_latency_us": 3019898.88 00:25:44.277 } 00:25:44.277 ], 00:25:44.277 "core_count": 1 00:25:44.277 } 00:25:44.277 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1740844 00:25:44.277 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt 00:25:44.277 [2024-12-10 12:33:35.876079] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:25:44.277 [2024-12-10 12:33:35.876132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1740844 ] 00:25:44.277 [2024-12-10 12:33:35.953267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.277 [2024-12-10 12:33:35.993426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:44.277 Running I/O for 90 seconds... 
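The repeated `port_status` checks traced above all follow one pattern: call the `bdev_nvme_get_io_paths` RPC, then select a single boolean field for the io_path whose `transport.trsvcid` matches the port under test, via `jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'`. As an illustrative sketch only (not part of the log), the same selection can be written in plain Python against a minimal sample of the RPC's JSON shape; the field names come from the jq filters in the trace, while the sample values themselves are invented for illustration:

```python
# Minimal sample mimicking the shape of `bdev_nvme_get_io_paths` output,
# as implied by the jq filters in the log. Values are illustrative only.
SAMPLE = {
    "poll_groups": [
        {
            "io_paths": [
                {"transport": {"trsvcid": "4420"},
                 "current": True, "connected": True, "accessible": True},
                {"transport": {"trsvcid": "4421"},
                 "current": False, "connected": True, "accessible": False},
            ]
        }
    ]
}

def port_status(io_paths: dict, port: str, field: str) -> bool:
    """Mirror of the jq filter: find the io_path whose transport.trsvcid
    matches `port` and return the requested boolean field."""
    for group in io_paths["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == port:
                return path[field]
    raise KeyError(f"no io_path on port {port}")

# Equivalent of the log's `[[ true == \t\r\u\e ]]` / `[[ false == \f\a\l\s\e ]]`
# comparisons after each jq invocation.
assert port_status(SAMPLE, "4420", "current") is True
assert port_status(SAMPLE, "4421", "accessible") is False
```

For the bdevperf summary just above, the throughput figures are self-consistent: with `io_size` 4096, `mibps` is `iops * 4096 / 2**20`, i.e. 10514.07 * 4096 / 1048576 ≈ 41.07, matching the reported value.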
00:25:44.277 11123.00 IOPS, 43.45 MiB/s [2024-12-10T11:34:06.445Z] 11149.50 IOPS, 43.55 MiB/s [2024-12-10T11:34:06.445Z] 11165.67 IOPS, 43.62 MiB/s [2024-12-10T11:34:06.445Z] 11193.25 IOPS, 43.72 MiB/s [2024-12-10T11:34:06.445Z] 11218.40 IOPS, 43.82 MiB/s [2024-12-10T11:34:06.445Z] 11272.00 IOPS, 44.03 MiB/s [2024-12-10T11:34:06.445Z] 11286.14 IOPS, 44.09 MiB/s [2024-12-10T11:34:06.445Z] 11330.50 IOPS, 44.26 MiB/s [2024-12-10T11:34:06.445Z] 11333.78 IOPS, 44.27 MiB/s [2024-12-10T11:34:06.445Z] 11324.70 IOPS, 44.24 MiB/s [2024-12-10T11:34:06.445Z] 11320.27 IOPS, 44.22 MiB/s [2024-12-10T11:34:06.445Z] 11323.08 IOPS, 44.23 MiB/s [2024-12-10T11:34:06.445Z] [2024-12-10 12:33:49.990874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:111368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.277 [2024-12-10 12:33:49.990916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:44.277 [2024-12-10 12:33:49.990955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.277 [2024-12-10 12:33:49.990964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:44.277 [2024-12-10 12:33:49.990977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.277 [2024-12-10 12:33:49.990985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:44.277 [2024-12-10 12:33:49.990998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.277 [2024-12-10 12:33:49.991005] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:44.277 [2024-12-10 12:33:49.991018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.277 [2024-12-10 12:33:49.991025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:44.277 [2024-12-10 12:33:49.991037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.277 [2024-12-10 12:33:49.991044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:44.277 [2024-12-10 12:33:49.991057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:111424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.277 [2024-12-10 12:33:49.991065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:44.277 [2024-12-10 12:33:49.991077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:111432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.277 [2024-12-10 12:33:49.991085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:44.277 [2024-12-10 12:33:49.991098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:111440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.277 [2024-12-10 12:33:49.991106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:44.277 [2024-12-10 12:33:49.991118] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:111448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.277 [2024-12-10 12:33:49.991132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:44.277 [2024-12-10 12:33:49.991145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.277 [2024-12-10 12:33:49.991155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:44.277 [2024-12-10 12:33:49.991175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.277 [2024-12-10 12:33:49.991182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:44.277 [2024-12-10 12:33:49.991195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.277 [2024-12-10 12:33:49.991203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:44.277 [2024-12-10 12:33:49.991216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:111480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.277 [2024-12-10 12:33:49.991224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:44.277 [2024-12-10 12:33:49.991236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.277 [2024-12-10 12:33:49.991243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:44.277 [2024-12-10 12:33:49.991256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.277 [2024-12-10 12:33:49.991264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:44.277 [2024-12-10 12:33:49.991278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.277 [2024-12-10 12:33:49.991287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:44.277 [2024-12-10 12:33:49.991302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:111512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.277 [2024-12-10 12:33:49.991309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:44.277 [2024-12-10 12:33:49.991324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:111520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.277 [2024-12-10 12:33:49.991332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:44.277 [2024-12-10 12:33:49.991346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:111528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.277 [2024-12-10 12:33:49.991354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:44.277 [2024-12-10 12:33:49.991367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:111536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.277 [2024-12-10 12:33:49.991375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:44.277 [2024-12-10 12:33:49.991388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:111544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.277 [2024-12-10 12:33:49.991397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:44.277 [2024-12-10 12:33:49.991410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.991417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.991430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:111560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.991437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.991449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:111568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.991456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.991469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:111576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.991477] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.991489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:111584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.991497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.991510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.991518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.991530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:111600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.991538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.991552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.991560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.991573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.991579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.991591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:111624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.991598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.991611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.991618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.991631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:111640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.991638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:111648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:111656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:111664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:111672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:111696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992258] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992375] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992538] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:111816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:111824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:44.278 [2024-12-10 12:33:49.992649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.278 [2024-12-10 12:33:49.992657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.992673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:111848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.992680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.992696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:111856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.992703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.992718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:111864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.992725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.992740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.992747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.992762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.992769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.992784] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:111888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.992790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.992806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:111896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.992813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.992828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.992835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.992850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:111912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.992858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.992873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.992880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.992895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:111928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.992902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.992917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:111936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.992926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.992941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:111944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.992948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.992963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:111952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.992970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.992985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:111960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.992992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.993007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.993014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.993029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.993037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.993052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.993059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.993074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:111992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.993083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.993099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:112000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.993106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.993121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.993127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.993142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:112016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.993149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.993172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.993180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.993196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.993204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.993225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.993232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.993248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.993254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.993272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:112056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.993279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.993294] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:112064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.993301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.993316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:112072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.993324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.993340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:112080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.993347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.993362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:112088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.993369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.993384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.993392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.993407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.993414] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.993429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:112112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.993436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.993451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:112120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.993459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.993474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.279 [2024-12-10 12:33:49.993480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:44.279 [2024-12-10 12:33:49.993497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.993505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.993520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:112144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.993527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.993551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:112152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.993559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.993658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.993667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.993686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.993693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.993711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.993718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.993736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:112184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.993743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.993761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.993769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.993787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:112200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.993794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.993811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:112208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.993818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.993836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:112216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.993843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.993860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:112224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.993867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.993887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:112232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.993894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.993912] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:112240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.993919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.993936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:112248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.993944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.993962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:112256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.993968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.993986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.993993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.994011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:112272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.994019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.994037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.994044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.994062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:112288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.994069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.994087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:111376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.280 [2024-12-10 12:33:49.994094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.994111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:112296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.994118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.994136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:112304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.994143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.994165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:112312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.994176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.994193] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:112320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.994202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.994219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:112328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.994226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.994245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:112336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.994252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.994270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:112344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.994276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.994293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:112352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.994301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.994319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:112360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.994326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.994343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.994350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.994367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:112376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.994374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:33:49.994391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:112384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:33:49.994398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:44.280 11153.31 IOPS, 43.57 MiB/s [2024-12-10T11:34:06.448Z] 10356.64 IOPS, 40.46 MiB/s [2024-12-10T11:34:06.448Z] 9666.20 IOPS, 37.76 MiB/s [2024-12-10T11:34:06.448Z] 9198.00 IOPS, 35.93 MiB/s [2024-12-10T11:34:06.448Z] 9320.06 IOPS, 36.41 MiB/s [2024-12-10T11:34:06.448Z] 9433.22 IOPS, 36.85 MiB/s [2024-12-10T11:34:06.448Z] 9600.79 IOPS, 37.50 MiB/s [2024-12-10T11:34:06.448Z] 9787.45 IOPS, 38.23 MiB/s [2024-12-10T11:34:06.448Z] 9957.86 IOPS, 38.90 MiB/s [2024-12-10T11:34:06.448Z] 10016.36 IOPS, 39.13 MiB/s [2024-12-10T11:34:06.448Z] 10080.43 IOPS, 39.38 MiB/s [2024-12-10T11:34:06.448Z] 10127.88 IOPS, 39.56 MiB/s [2024-12-10T11:34:06.448Z] 10255.40 IOPS, 40.06 MiB/s [2024-12-10T11:34:06.448Z] 10374.04 IOPS, 40.52 MiB/s [2024-12-10T11:34:06.448Z] [2024-12-10 12:34:03.843641] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:127624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.280 [2024-12-10 12:34:03.843681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:44.280 [2024-12-10 12:34:03.843716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.843725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.843743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:127656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.843751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.843764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:127672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.843771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.843783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:127688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.843790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.843803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:127216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.281 [2024-12-10 12:34:03.843810] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.843823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:127248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.281 [2024-12-10 12:34:03.843829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.843841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:127280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.281 [2024-12-10 12:34:03.843850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.843862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:127312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.281 [2024-12-10 12:34:03.843869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.843881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:127712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.843888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.843900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:127728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.843909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.843921] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:127736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.843928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.843940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:127752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.843948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.843959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.843966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.843979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:127784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.843989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.844002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:127800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.844010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.844023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:127816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.844032] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.844046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:127832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.844054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.844066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:127848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.844073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.844085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.844093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.844105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:127880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.844112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.844123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.844130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.844142] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:127912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.844149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.844166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:127928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.844173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.844185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.844192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.844205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:127960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.844212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.844225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:127976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.844234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.844733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:127992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.844746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.844761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:128008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.844769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.844782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:128024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.844789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.844802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.844809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.844821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.281 [2024-12-10 12:34:03.844828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:44.281 [2024-12-10 12:34:03.844840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.282 [2024-12-10 12:34:03.844847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.844860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.282 [2024-12-10 12:34:03.844867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.844879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:127224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.844886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.844898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:127256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.844905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.844918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:127288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.844925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.844937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.844944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.844956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:128096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.282 [2024-12-10 12:34:03.844963] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.844978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.282 [2024-12-10 12:34:03.844985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.844997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:128128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.282 [2024-12-10 12:34:03.845004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.845016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:128144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.282 [2024-12-10 12:34:03.845024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.845036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.845043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.845055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:127400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.845062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.845074] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:127432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.845081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.845094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:127464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.845101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.845113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:127496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.845119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.845131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:127528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.845138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.845151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:127560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.845163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.845176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:127592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.845199] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.845213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:128152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.282 [2024-12-10 12:34:03.845220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.845235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.282 [2024-12-10 12:34:03.845243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.845256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.282 [2024-12-10 12:34:03.845263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.845790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:128200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.282 [2024-12-10 12:34:03.845804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.845819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.282 [2024-12-10 12:34:03.845827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.845841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:127344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.845847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.845860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:127376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.845868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.845880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:127408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.845888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.845900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.845907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.845919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:127472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.845927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.845939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:127504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.845946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.845958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:127536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.845965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.845978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:127568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.845985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.846000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:127600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.846008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.846021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:127632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.846028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.846042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:127664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.846050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:44.282 [2024-12-10 12:34:03.846064] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:127696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:44.282 [2024-12-10 12:34:03.846071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:44.282 10454.85 IOPS, 40.84 MiB/s [2024-12-10T11:34:06.450Z] 10486.71 IOPS, 40.96 MiB/s [2024-12-10T11:34:06.450Z] Received shutdown signal, test time was about 29.005784 seconds 00:25:44.282 00:25:44.282 Latency(us) 00:25:44.282 [2024-12-10T11:34:06.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.282 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:44.283 Verification LBA range: start 0x0 length 0x4000 00:25:44.283 Nvme0n1 : 29.01 10514.07 41.07 0.00 0.00 12154.46 141.58 3019898.88 00:25:44.283 [2024-12-10T11:34:06.451Z] =================================================================================================================== 00:25:44.283 [2024-12-10T11:34:06.451Z] Total : 10514.07 41.07 0.00 0.00 12154.46 141.58 3019898.88 00:25:44.283 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:44.542 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:44.542 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt 00:25:44.542 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:44.542 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:44.542 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 
00:25:44.542 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:44.542 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:25:44.542 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:44.542 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:44.542 rmmod nvme_tcp 00:25:44.542 rmmod nvme_fabrics 00:25:44.542 rmmod nvme_keyring 00:25:44.542 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:44.542 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:25:44.542 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:25:44.542 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1740539 ']' 00:25:44.542 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1740539 00:25:44.542 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1740539 ']' 00:25:44.542 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1740539 00:25:44.542 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:44.542 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:44.542 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1740539 00:25:44.802 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:44.802 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:44.802 
12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1740539' 00:25:44.802 killing process with pid 1740539 00:25:44.802 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1740539 00:25:44.802 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1740539 00:25:44.802 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:44.802 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:44.802 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:44.802 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:25:44.802 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:25:44.802 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:44.802 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:25:44.802 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:44.802 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:44.802 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.802 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:44.802 12:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.341 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:47.341 00:25:47.341 real 
0m40.809s 00:25:47.341 user 1m50.746s 00:25:47.341 sys 0m11.637s 00:25:47.341 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:47.341 12:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:47.341 ************************************ 00:25:47.341 END TEST nvmf_host_multipath_status 00:25:47.341 ************************************ 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.342 ************************************ 00:25:47.342 START TEST nvmf_discovery_remove_ifc 00:25:47.342 ************************************ 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:47.342 * Looking for test storage... 
00:25:47.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:25:47.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.342 --rc genhtml_branch_coverage=1 00:25:47.342 --rc genhtml_function_coverage=1 00:25:47.342 --rc genhtml_legend=1 00:25:47.342 --rc geninfo_all_blocks=1 00:25:47.342 --rc geninfo_unexecuted_blocks=1 00:25:47.342 00:25:47.342 ' 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:47.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.342 --rc genhtml_branch_coverage=1 00:25:47.342 --rc genhtml_function_coverage=1 00:25:47.342 --rc genhtml_legend=1 00:25:47.342 --rc geninfo_all_blocks=1 00:25:47.342 --rc geninfo_unexecuted_blocks=1 00:25:47.342 00:25:47.342 ' 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:47.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.342 --rc genhtml_branch_coverage=1 00:25:47.342 --rc genhtml_function_coverage=1 00:25:47.342 --rc genhtml_legend=1 00:25:47.342 --rc geninfo_all_blocks=1 00:25:47.342 --rc geninfo_unexecuted_blocks=1 00:25:47.342 00:25:47.342 ' 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:47.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.342 --rc genhtml_branch_coverage=1 00:25:47.342 --rc genhtml_function_coverage=1 00:25:47.342 --rc genhtml_legend=1 00:25:47.342 --rc geninfo_all_blocks=1 00:25:47.342 --rc geninfo_unexecuted_blocks=1 00:25:47.342 00:25:47.342 ' 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.342 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:47.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 
00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:47.343 12:34:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.919 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:53.919 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:53.919 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:53.919 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:53.920 
12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:53.920 12:34:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:53.920 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:53.920 12:34:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:53.920 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:53.920 Found net devices under 0000:86:00.0: cvl_0_0 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:53.920 Found net devices under 0000:86:00.1: cvl_0_1 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:53.920 12:34:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:25:53.920 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:53.920 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:53.920 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:53.920 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:53.920 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:53.920 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:53.920 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:53.920 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:53.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:53.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:25:53.920 00:25:53.920 --- 10.0.0.2 ping statistics --- 00:25:53.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.920 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:53.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:53.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:25:53.921 00:25:53.921 --- 10.0.0.1 ping statistics --- 00:25:53.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.921 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1749539 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1749539 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1749539 ']' 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.921 [2024-12-10 12:34:15.255325] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:25:53.921 [2024-12-10 12:34:15.255381] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.921 [2024-12-10 12:34:15.335377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.921 [2024-12-10 12:34:15.373824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:53.921 [2024-12-10 12:34:15.373859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:53.921 [2024-12-10 12:34:15.373866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:53.921 [2024-12-10 12:34:15.373872] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:53.921 [2024-12-10 12:34:15.373877] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:53.921 [2024-12-10 12:34:15.374404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.921 [2024-12-10 12:34:15.526369] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:53.921 [2024-12-10 12:34:15.534577] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:53.921 null0 00:25:53.921 [2024-12-10 12:34:15.566535] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1749568 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1749568 /tmp/host.sock 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1749568 ']' 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:53.921 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.921 [2024-12-10 12:34:15.637546] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:25:53.921 [2024-12-10 12:34:15.637587] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1749568 ] 00:25:53.921 [2024-12-10 12:34:15.711620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.921 [2024-12-10 12:34:15.751485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.921 12:34:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.921 12:34:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:54.859 [2024-12-10 12:34:16.902638] bdev_nvme.c:7517:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:54.859 [2024-12-10 12:34:16.902656] bdev_nvme.c:7603:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:54.859 [2024-12-10 12:34:16.902668] bdev_nvme.c:7480:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:55.119 [2024-12-10 12:34:17.031062] bdev_nvme.c:7446:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:55.119 [2024-12-10 12:34:17.132771] bdev_nvme.c:5662:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:55.119 [2024-12-10 12:34:17.133475] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1531940:1 started. 
00:25:55.119 [2024-12-10 12:34:17.134800] bdev_nvme.c:8313:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:55.119 [2024-12-10 12:34:17.134841] bdev_nvme.c:8313:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:55.119 [2024-12-10 12:34:17.134861] bdev_nvme.c:8313:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:55.119 [2024-12-10 12:34:17.134874] bdev_nvme.c:7336:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:55.119 [2024-12-10 12:34:17.134891] bdev_nvme.c:7295:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:55.119 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.119 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:55.119 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:55.119 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.119 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:55.119 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.119 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:55.119 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:55.119 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:55.119 [2024-12-10 12:34:17.141889] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1531940 was disconnected and freed. delete nvme_qpair. 
00:25:55.119 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.119 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:55.119 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:55.119 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:55.119 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:55.119 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:55.119 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.119 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:55.119 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:55.119 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:55.119 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.120 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:55.379 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.379 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:55.379 12:34:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:56.318 12:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:56.318 12:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.318 12:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:56.318 12:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.318 12:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:56.318 12:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:56.318 12:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:56.318 12:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.318 12:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:56.318 12:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:57.255 12:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:57.256 12:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.256 12:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:57.256 12:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.256 12:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:57.256 12:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:57.256 12:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:25:57.256 12:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.515 12:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:57.515 12:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:58.452 12:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:58.452 12:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.452 12:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:58.452 12:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.452 12:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:58.452 12:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:58.452 12:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:58.452 12:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.452 12:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:58.452 12:34:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:59.390 12:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:59.390 12:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.390 12:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:59.390 12:34:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.390 12:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:59.390 12:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:59.390 12:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:59.390 12:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.390 12:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:59.390 12:34:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:00.769 12:34:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:00.769 12:34:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.769 12:34:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:00.769 12:34:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.769 12:34:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:00.769 12:34:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.769 12:34:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:00.769 12:34:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.769 12:34:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:00.769 12:34:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:26:00.769 [2024-12-10 12:34:22.576446] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:00.769 [2024-12-10 12:34:22.576493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.769 [2024-12-10 12:34:22.576504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.769 [2024-12-10 12:34:22.576514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.769 [2024-12-10 12:34:22.576521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.770 [2024-12-10 12:34:22.576528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.770 [2024-12-10 12:34:22.576536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.770 [2024-12-10 12:34:22.576543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.770 [2024-12-10 12:34:22.576550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.770 [2024-12-10 12:34:22.576557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:00.770 [2024-12-10 12:34:22.576565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:00.770 [2024-12-10 12:34:22.576571] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x150e120 is same with the state(6) to be set 00:26:00.770 [2024-12-10 12:34:22.586467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x150e120 (9): Bad file descriptor 00:26:00.770 [2024-12-10 12:34:22.596506] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:00.770 [2024-12-10 12:34:22.596517] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:00.770 [2024-12-10 12:34:22.596524] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:00.770 [2024-12-10 12:34:22.596529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:00.770 [2024-12-10 12:34:22.596553] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:01.708 12:34:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:01.708 12:34:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.708 12:34:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:01.708 12:34:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.708 12:34:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:01.708 12:34:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:01.708 12:34:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:01.708 [2024-12-10 12:34:23.610228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:01.708 [2024-12-10 12:34:23.610297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x150e120 with addr=10.0.0.2, port=4420 00:26:01.708 [2024-12-10 12:34:23.610327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x150e120 is same with the state(6) to be set 00:26:01.708 [2024-12-10 12:34:23.610377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x150e120 (9): Bad file descriptor 00:26:01.708 [2024-12-10 12:34:23.611330] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:26:01.708 [2024-12-10 12:34:23.611395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:01.708 [2024-12-10 12:34:23.611419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:01.708 [2024-12-10 12:34:23.611441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:01.708 [2024-12-10 12:34:23.611463] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:01.708 [2024-12-10 12:34:23.611479] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:01.708 [2024-12-10 12:34:23.611493] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:01.708 [2024-12-10 12:34:23.611514] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:01.708 [2024-12-10 12:34:23.611529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:01.708 12:34:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.708 12:34:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:01.708 12:34:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:02.647 [2024-12-10 12:34:24.614047] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:02.647 [2024-12-10 12:34:24.614067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:02.647 [2024-12-10 12:34:24.614080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:02.647 [2024-12-10 12:34:24.614087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:02.647 [2024-12-10 12:34:24.614094] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:02.647 [2024-12-10 12:34:24.614100] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:02.647 [2024-12-10 12:34:24.614104] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:02.647 [2024-12-10 12:34:24.614108] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:02.647 [2024-12-10 12:34:24.614127] bdev_nvme.c:7268:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:02.647 [2024-12-10 12:34:24.614146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:02.647 [2024-12-10 12:34:24.614156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.647 [2024-12-10 12:34:24.614171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:02.647 [2024-12-10 12:34:24.614178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.647 [2024-12-10 12:34:24.614185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:02.647 [2024-12-10 12:34:24.614191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.647 [2024-12-10 12:34:24.614198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:02.647 [2024-12-10 12:34:24.614209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.647 [2024-12-10 12:34:24.614216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:02.647 [2024-12-10 12:34:24.614223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.647 [2024-12-10 12:34:24.614230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:26:02.647 [2024-12-10 12:34:24.614667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14fd430 (9): Bad file descriptor 00:26:02.647 [2024-12-10 12:34:24.615679] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:02.647 [2024-12-10 12:34:24.615691] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:02.647 12:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:02.647 12:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.647 12:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:02.647 12:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:02.647 12:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:02.647 12:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:02.647 12:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:02.647 12:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.647 12:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:02.647 12:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:02.647 12:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:02.647 12:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:02.647 12:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:02.647 12:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:02.647 12:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:02.647 12:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.647 12:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:02.647 12:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:02.647 12:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:02.647 12:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:02.647 12:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:02.647 12:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:04.024 12:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:04.024 12:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:04.024 12:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:04.024 12:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.024 12:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:04.024 12:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:04.024 12:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:04.024 12:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.024 12:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:04.024 12:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:04.593 [2024-12-10 12:34:26.668683] bdev_nvme.c:7517:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:04.593 [2024-12-10 12:34:26.668701] bdev_nvme.c:7603:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:04.593 [2024-12-10 12:34:26.668712] bdev_nvme.c:7480:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:04.853 [2024-12-10 12:34:26.795099] bdev_nvme.c:7446:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:04.853 [2024-12-10 12:34:26.857730] bdev_nvme.c:5662:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:04.853 [2024-12-10 12:34:26.858346] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x153b0c0:1 started. 00:26:04.853 [2024-12-10 12:34:26.859370] bdev_nvme.c:8313:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:04.853 [2024-12-10 12:34:26.859401] bdev_nvme.c:8313:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:04.853 [2024-12-10 12:34:26.859419] bdev_nvme.c:8313:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:04.853 [2024-12-10 12:34:26.859432] bdev_nvme.c:7336:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:04.853 [2024-12-10 12:34:26.859440] bdev_nvme.c:7295:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:04.853 12:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:04.853 12:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:04.853 12:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:04.853 12:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.853 12:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:04.853 12:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:04.853 12:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:04.853 [2024-12-10 12:34:26.866757] 
bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x153b0c0 was disconnected and freed. delete nvme_qpair. 00:26:04.853 12:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.853 12:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:04.853 12:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:04.853 12:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1749568 00:26:04.853 12:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1749568 ']' 00:26:04.853 12:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1749568 00:26:04.853 12:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:04.853 12:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:04.853 12:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1749568 00:26:04.853 12:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:04.853 12:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:04.853 12:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1749568' 00:26:04.853 killing process with pid 1749568 00:26:04.853 12:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1749568 00:26:04.853 12:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1749568 00:26:05.112 12:34:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:05.112 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:05.112 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:05.112 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:05.112 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:05.112 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:05.112 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:05.112 rmmod nvme_tcp 00:26:05.112 rmmod nvme_fabrics 00:26:05.112 rmmod nvme_keyring 00:26:05.112 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:05.112 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:05.112 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:05.112 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1749539 ']' 00:26:05.112 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1749539 00:26:05.112 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1749539 ']' 00:26:05.112 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1749539 00:26:05.112 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:05.112 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:05.112 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 1749539 00:26:05.112 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:05.112 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:05.112 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1749539' 00:26:05.112 killing process with pid 1749539 00:26:05.112 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1749539 00:26:05.112 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1749539 00:26:05.372 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:05.372 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:05.372 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:05.372 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:05.372 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:05.372 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:05.372 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:05.372 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:05.372 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:05.372 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.372 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
00:26:05.372 12:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.279 12:34:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:07.538 00:26:07.538 real 0m20.406s 00:26:07.538 user 0m24.589s 00:26:07.538 sys 0m5.780s 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:07.538 ************************************ 00:26:07.538 END TEST nvmf_discovery_remove_ifc 00:26:07.538 ************************************ 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.538 ************************************ 00:26:07.538 START TEST nvmf_identify_kernel_target 00:26:07.538 ************************************ 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:07.538 * Looking for test storage... 
00:26:07.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:07.538 12:34:29 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:07.538 12:34:29 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:07.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.538 --rc genhtml_branch_coverage=1 00:26:07.538 --rc genhtml_function_coverage=1 00:26:07.538 --rc genhtml_legend=1 00:26:07.538 --rc geninfo_all_blocks=1 00:26:07.538 --rc geninfo_unexecuted_blocks=1 00:26:07.538 00:26:07.538 ' 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:07.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.538 --rc genhtml_branch_coverage=1 00:26:07.538 --rc genhtml_function_coverage=1 00:26:07.538 --rc genhtml_legend=1 00:26:07.538 --rc geninfo_all_blocks=1 00:26:07.538 --rc geninfo_unexecuted_blocks=1 00:26:07.538 00:26:07.538 ' 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:07.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.538 --rc genhtml_branch_coverage=1 00:26:07.538 --rc genhtml_function_coverage=1 00:26:07.538 --rc genhtml_legend=1 00:26:07.538 --rc geninfo_all_blocks=1 00:26:07.538 --rc geninfo_unexecuted_blocks=1 00:26:07.538 00:26:07.538 ' 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:07.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.538 --rc genhtml_branch_coverage=1 00:26:07.538 --rc genhtml_function_coverage=1 00:26:07.538 --rc genhtml_legend=1 00:26:07.538 --rc geninfo_all_blocks=1 00:26:07.538 --rc geninfo_unexecuted_blocks=1 00:26:07.538 00:26:07.538 ' 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:26:07.538 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:26:07.797 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:07.797 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:07.797 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:07.797 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:07.797 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:07.797 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:07.797 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:07.797 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:07.797 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:07.797 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:07.797 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:07.797 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:07.797 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:07.797 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:07.797 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:07.797 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:07.797 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:26:07.797 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:07.797 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:07.797 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:07.797 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:07.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable
00:26:07.798 12:34:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:26:14.370 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:26:14.370 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=()
00:26:14.370 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:26:14.370 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:26:14.370 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:26:14.370 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:26:14.370 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=()
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=()
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=()
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=()
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:26:14.371 Found 0000:86:00.0 (0x8086 - 0x159b)
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:26:14.371 Found 0000:86:00.1 (0x8086 - 0x159b)
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:26:14.371 Found net devices under 0000:86:00.0: cvl_0_0
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:26:14.371 Found net devices under 0000:86:00.1: cvl_0_1
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:14.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:14.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms
00:26:14.371
00:26:14.371 --- 10.0.0.2 ping statistics ---
00:26:14.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:14.371 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:14.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:14.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms
00:26:14.371
00:26:14.371 --- 10.0.0.1 ping statistics ---
00:26:14.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:14.371 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:14.371 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:26:14.372 12:34:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset
00:26:16.279 Waiting for block devices as requested
00:26:16.279 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:26:16.538 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:26:16.538 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:26:16.538 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:26:16.798 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:26:16.798 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:26:16.798 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:26:16.798 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:26:17.057 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:26:17.057 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:26:17.057 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:26:17.317 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:26:17.317 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:26:17.317 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:26:17.317 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:26:17.576 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:26:17.576 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:26:17.576 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:26:17.576 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:26:17.576 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:26:17.576 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:26:17.576 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:26:17.576 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:26:17.576 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:26:17.576 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:26:17.576 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/spdk-gpt.py nvme0n1
00:26:17.836 No valid GPT data, bailing
00:26:17.836 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:26:17.836 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt=
00:26:17.836 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1
00:26:17.836 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:26:17.836 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:26:17.836 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:26:17.836 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:26:17.836 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:26:17.836 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:26:17.836 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1
00:26:17.836 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:26:17.836 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1
00:26:17.836 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:26:17.836 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp
00:26:17.836 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420
00:26:17.836 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4
00:26:17.836 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:26:17.836 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:26:17.836
00:26:17.836 Discovery Log Number of Records 2, Generation counter 2
00:26:17.836 =====Discovery Log Entry 0======
00:26:17.836 trtype: tcp
00:26:17.836 adrfam: ipv4
00:26:17.836 subtype: current discovery subsystem
00:26:17.836 treq: not specified, sq flow control disable supported
00:26:17.836 portid: 1
00:26:17.836 trsvcid: 4420
00:26:17.836 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:26:17.836 traddr: 10.0.0.1
00:26:17.836 eflags: none
00:26:17.836 sectype: none
00:26:17.836 =====Discovery Log Entry 1======
00:26:17.836 trtype: tcp
00:26:17.836 adrfam: ipv4
00:26:17.836 subtype: nvme subsystem
00:26:17.836 treq: not specified, sq flow control disable supported
00:26:17.836 portid: 1
00:26:17.836 trsvcid: 4420
00:26:17.836 subnqn: nqn.2016-06.io.spdk:testnqn
00:26:17.836 traddr: 10.0.0.1
00:26:17.836 eflags: none
00:26:17.836 sectype: none
00:26:17.836 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1
00:26:17.836 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
00:26:17.836 =====================================================
00:26:17.836 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery
00:26:17.836 =====================================================
00:26:17.836 Controller Capabilities/Features
00:26:17.836 ================================
00:26:17.836 Vendor ID: 0000
00:26:17.836 Subsystem Vendor ID: 0000
00:26:17.836 Serial Number: 83b67549facb895c144f
00:26:17.836 Model Number: Linux
00:26:17.836 Firmware Version: 6.8.9-20
00:26:17.836 Recommended Arb Burst: 0
00:26:17.836 IEEE OUI Identifier: 00 00 00
00:26:17.836 Multi-path I/O
00:26:17.836 May have multiple subsystem ports: No
00:26:17.836 May have multiple controllers: No
00:26:17.836 Associated with SR-IOV VF: No
00:26:17.836 Max Data Transfer Size: Unlimited
00:26:17.836 Max Number of Namespaces: 0
00:26:17.836 Max Number of I/O Queues: 1024
00:26:17.836 NVMe Specification Version (VS): 1.3
00:26:17.836 NVMe Specification Version (Identify): 1.3
00:26:17.836 Maximum Queue Entries: 1024
00:26:17.836 Contiguous Queues Required: No
00:26:17.836 Arbitration Mechanisms Supported
00:26:17.836 Weighted Round Robin: Not Supported
00:26:17.836 Vendor Specific: Not Supported
00:26:17.836 Reset Timeout: 7500 ms
00:26:17.836 Doorbell Stride: 4 bytes
00:26:17.836 NVM Subsystem Reset: Not Supported
00:26:17.836 Command Sets Supported
00:26:17.836 NVM Command Set: Supported
00:26:17.836 Boot Partition: Not Supported
00:26:17.836 Memory Page Size Minimum: 4096 bytes
00:26:17.836 Memory Page Size Maximum: 4096 bytes
00:26:17.836 Persistent Memory Region: Not Supported
00:26:17.836 Optional Asynchronous Events Supported
00:26:17.836 Namespace Attribute Notices: Not Supported
00:26:17.836 Firmware Activation Notices: Not Supported
00:26:17.836 ANA Change Notices: Not Supported
00:26:17.836 PLE Aggregate Log Change Notices: Not Supported
00:26:17.836 LBA Status Info Alert Notices: Not Supported
00:26:17.836 EGE Aggregate Log Change Notices: Not Supported
00:26:17.836 Normal NVM Subsystem Shutdown event: Not Supported
00:26:17.836 Zone Descriptor Change Notices: Not Supported
00:26:17.836 Discovery Log Change Notices: Supported
00:26:17.836 Controller Attributes
00:26:17.836 128-bit Host Identifier: Not Supported
00:26:17.836 Non-Operational Permissive Mode: Not Supported
00:26:17.836 NVM Sets: Not Supported
00:26:17.836 Read Recovery Levels: Not Supported
00:26:17.836 Endurance Groups: Not Supported
00:26:17.836 Predictable Latency Mode: Not Supported
00:26:17.836 Traffic Based Keep ALive: Not Supported
00:26:17.836 Namespace Granularity: Not Supported
00:26:17.836 SQ Associations: Not Supported
00:26:17.836 UUID List: Not Supported
00:26:17.836 Multi-Domain Subsystem: Not Supported
00:26:17.836 Fixed Capacity Management: Not Supported
00:26:17.836 Variable Capacity Management: Not Supported
00:26:17.836 Delete Endurance Group: Not Supported
00:26:17.836 Delete NVM Set: Not Supported
00:26:17.836 Extended LBA Formats Supported: Not Supported
00:26:17.836 Flexible Data Placement Supported: Not Supported
00:26:17.836
00:26:17.836 Controller Memory Buffer Support
00:26:17.836 ================================
00:26:17.836 Supported: No
00:26:17.836
00:26:17.836 Persistent Memory Region Support
00:26:17.836 ================================
00:26:17.836 Supported: No
00:26:17.836
00:26:17.836 Admin Command Set Attributes
00:26:17.836 ============================
00:26:17.836 Security Send/Receive: Not Supported
00:26:17.836 Format NVM: Not Supported
00:26:17.836 Firmware Activate/Download: Not Supported
00:26:17.836 Namespace Management: Not Supported
00:26:17.837 Device Self-Test: Not Supported
00:26:17.837 Directives: Not Supported
00:26:17.837 NVMe-MI: Not Supported
00:26:17.837 Virtualization Management: Not Supported
00:26:17.837 Doorbell Buffer Config: Not Supported
00:26:17.837 Get LBA Status Capability: Not Supported
00:26:17.837 Command & Feature Lockdown Capability: Not Supported
00:26:17.837 Abort Command Limit: 1
00:26:17.837 Async Event Request Limit: 1
00:26:17.837 Number of Firmware Slots: N/A
00:26:17.837 Firmware Slot 1 Read-Only: N/A
00:26:17.837 Firmware Activation Without Reset: N/A
00:26:17.837 Multiple Update Detection Support: N/A
00:26:17.837 Firmware Update Granularity: No Information Provided
00:26:17.837 Per-Namespace SMART Log: No
00:26:17.837 Asymmetric Namespace Access Log Page: Not Supported
00:26:17.837 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:26:17.837 Command Effects Log Page: Not Supported
00:26:17.837 Get Log Page Extended Data: Supported
00:26:17.837 Telemetry Log Pages: Not Supported
00:26:17.837 Persistent Event Log Pages: Not Supported
00:26:17.837 Supported Log Pages Log Page: May Support
00:26:17.837 Commands Supported & Effects Log Page: Not Supported
00:26:17.837 Feature Identifiers & Effects Log Page:May Support
00:26:17.837 NVMe-MI Commands & Effects Log Page: May Support
00:26:17.837 Data Area 4 for Telemetry Log: Not Supported
00:26:17.837 Error Log Page Entries Supported: 1
00:26:17.837 Keep Alive: Not Supported
00:26:17.837
00:26:17.837 NVM Command Set Attributes
00:26:17.837 ==========================
00:26:17.837 Submission Queue Entry Size
00:26:17.837 Max: 1
00:26:17.837 Min: 1
00:26:17.837 Completion Queue Entry Size
00:26:17.837 Max: 1
00:26:17.837 Min: 1
00:26:17.837 Number of Namespaces: 0
00:26:17.837 Compare Command: Not Supported
00:26:17.837 Write Uncorrectable Command: Not Supported
00:26:17.837 Dataset Management Command: Not Supported
00:26:17.837 Write Zeroes Command: Not Supported
00:26:17.837 Set Features Save Field: Not Supported
00:26:17.837 Reservations: Not Supported
00:26:17.837 Timestamp: Not Supported
00:26:17.837 Copy: Not Supported
00:26:17.837 Volatile Write Cache: Not Present
00:26:17.837 Atomic Write Unit (Normal): 1
00:26:17.837 Atomic Write Unit (PFail): 1
00:26:17.837 Atomic Compare & Write Unit: 1
00:26:17.837 Fused Compare & Write: Not Supported
00:26:17.837 Scatter-Gather List
00:26:17.837 SGL Command Set: Supported
00:26:17.837 SGL Keyed: Not Supported
00:26:17.837 SGL Bit Bucket Descriptor: Not Supported
00:26:17.837 SGL Metadata Pointer: Not Supported
00:26:17.837 Oversized SGL: Not Supported
00:26:17.837 SGL Metadata Address: Not Supported
00:26:17.837 SGL Offset: Supported
00:26:17.837 Transport SGL Data Block: Not Supported
00:26:17.837 Replay Protected Memory Block: Not Supported
00:26:17.837
00:26:17.837 Firmware Slot Information
00:26:17.837 =========================
00:26:17.837 Active slot: 0
00:26:17.837
00:26:17.837
00:26:17.837 Error Log
00:26:17.837 =========
00:26:17.837
00:26:17.837 Active Namespaces
00:26:17.837 =================
00:26:17.837 Discovery Log Page
00:26:17.837 ==================
00:26:17.837 Generation Counter: 2
00:26:17.837 Number of Records: 2
00:26:17.837 Record Format: 0
00:26:17.837
00:26:17.837 Discovery Log Entry 0
00:26:17.837 ----------------------
00:26:17.837 Transport Type: 3 (TCP)
00:26:17.837 Address Family: 1 (IPv4)
00:26:17.837 Subsystem Type: 3 (Current Discovery Subsystem)
00:26:17.837 Entry Flags:
00:26:17.837 Duplicate Returned Information: 0
00:26:17.837 Explicit Persistent Connection Support for Discovery: 0
00:26:17.837 Transport Requirements:
00:26:17.837 Secure Channel: Not Specified
00:26:17.837 Port ID: 1 (0x0001)
00:26:17.837 Controller ID: 65535 (0xffff)
00:26:17.837 Admin Max SQ Size: 32
00:26:17.837 Transport Service Identifier: 4420
00:26:17.837 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:26:17.837 Transport Address: 10.0.0.1
00:26:17.837 Discovery Log Entry 1
00:26:17.837 ----------------------
00:26:17.837 Transport Type: 3 (TCP)
00:26:17.837 Address Family: 1 (IPv4)
00:26:17.837 Subsystem Type: 2 (NVM Subsystem)
00:26:17.837 Entry Flags:
00:26:17.837 Duplicate Returned Information: 0
00:26:17.837 Explicit Persistent Connection Support for Discovery: 0
00:26:17.837 Transport Requirements:
00:26:17.837 Secure Channel: Not Specified
00:26:17.837 Port ID: 1 (0x0001)
00:26:17.837 Controller ID: 65535 (0xffff)
00:26:17.837 Admin Max SQ Size: 32
00:26:17.837 Transport Service Identifier: 4420
00:26:17.837 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn
00:26:17.837 Transport Address: 10.0.0.1
00:26:17.837 12:34:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:26:18.098 get_feature(0x01) failed
00:26:18.098 get_feature(0x02) failed
00:26:18.098 get_feature(0x04) failed
00:26:18.098 =====================================================
00:26:18.098 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:26:18.098 =====================================================
00:26:18.098 Controller Capabilities/Features
00:26:18.098 ================================
00:26:18.098 Vendor ID: 0000
00:26:18.098 Subsystem Vendor ID: 0000
00:26:18.098 Serial Number: 3a6e57cb6f0b25ed5523
00:26:18.098 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn
00:26:18.098 Firmware Version: 6.8.9-20
00:26:18.098 Recommended Arb Burst: 6
00:26:18.098 IEEE OUI Identifier: 00 00 00
00:26:18.098 Multi-path I/O
00:26:18.098 May have multiple subsystem ports: Yes
00:26:18.098 May have multiple controllers: Yes
00:26:18.098 Associated with SR-IOV VF: No
00:26:18.098 Max Data Transfer Size: Unlimited
00:26:18.098 Max Number of Namespaces: 1024
00:26:18.098 Max Number of I/O Queues: 128
00:26:18.098 NVMe Specification Version (VS): 1.3
00:26:18.098 NVMe Specification Version (Identify): 1.3
00:26:18.098 Maximum Queue Entries: 1024
00:26:18.098 Contiguous Queues Required: No
00:26:18.098 Arbitration Mechanisms Supported
00:26:18.098 Weighted Round Robin: Not Supported
00:26:18.098 Vendor Specific: Not Supported
00:26:18.098 Reset Timeout: 7500 ms
00:26:18.098 Doorbell Stride: 4 bytes
00:26:18.098 NVM Subsystem Reset: Not Supported
00:26:18.098 Command Sets Supported
00:26:18.098 NVM Command Set: Supported
00:26:18.098 Boot Partition: Not Supported
00:26:18.098 Memory Page Size Minimum: 4096 bytes
00:26:18.098 Memory Page Size Maximum: 4096 bytes
00:26:18.098 Persistent Memory Region: Not Supported
00:26:18.098 Optional Asynchronous Events Supported
00:26:18.098 Namespace Attribute Notices: Supported
00:26:18.098 Firmware Activation Notices: Not Supported
00:26:18.098 ANA Change Notices: Supported
00:26:18.098 PLE Aggregate Log Change Notices: Not Supported
00:26:18.098 LBA Status Info Alert Notices: Not Supported
00:26:18.098 EGE Aggregate Log Change Notices: Not Supported
00:26:18.098 Normal NVM Subsystem Shutdown event: Not Supported
00:26:18.098 Zone Descriptor Change Notices: Not Supported
00:26:18.098 Discovery Log Change Notices: Not Supported
00:26:18.098 Controller Attributes
00:26:18.098 128-bit Host Identifier: Supported
00:26:18.098 Non-Operational Permissive Mode: Not Supported
00:26:18.098 NVM Sets: Not Supported
00:26:18.098 Read Recovery Levels: Not Supported
00:26:18.098 Endurance Groups: Not Supported
00:26:18.098 Predictable Latency Mode: Not Supported
00:26:18.098 Traffic Based Keep ALive: Supported
00:26:18.098 Namespace Granularity: Not Supported
00:26:18.098 SQ Associations: Not Supported
00:26:18.098 UUID List: Not Supported
00:26:18.098 Multi-Domain Subsystem: Not Supported
00:26:18.098 Fixed Capacity Management: Not Supported
00:26:18.098 Variable Capacity Management: Not Supported
00:26:18.098 Delete Endurance Group: Not Supported
00:26:18.098 Delete NVM Set: Not Supported
00:26:18.098 Extended LBA Formats Supported: Not Supported
00:26:18.098 Flexible Data Placement Supported: Not Supported
00:26:18.098
00:26:18.098 Controller Memory Buffer Support
00:26:18.098 ================================
00:26:18.098 Supported: No
00:26:18.098
00:26:18.098 Persistent Memory Region Support
00:26:18.098 ================================
00:26:18.098 Supported: No
00:26:18.098
00:26:18.098 Admin Command Set Attributes
00:26:18.098 ============================
00:26:18.098 Security Send/Receive: Not Supported
00:26:18.098 Format NVM: Not Supported
00:26:18.098 Firmware Activate/Download: Not Supported
00:26:18.098 Namespace Management: Not Supported
00:26:18.098 Device Self-Test: Not Supported
00:26:18.098 Directives: Not Supported
00:26:18.098 NVMe-MI: Not Supported
00:26:18.098 Virtualization Management: Not Supported
00:26:18.098 Doorbell Buffer Config: Not Supported
00:26:18.098 Get LBA Status Capability: Not Supported
00:26:18.098 Command & Feature Lockdown Capability: Not Supported
00:26:18.098 Abort Command Limit: 4
00:26:18.098 Async Event Request Limit: 4
00:26:18.098 Number of Firmware Slots: N/A
00:26:18.098 Firmware Slot 1 Read-Only: N/A
00:26:18.098 Firmware Activation Without Reset: N/A
00:26:18.098 Multiple Update Detection Support: N/A
00:26:18.098 Firmware Update Granularity: No Information Provided
00:26:18.098 Per-Namespace SMART Log: Yes
00:26:18.098 Asymmetric Namespace Access Log Page: Supported 00:26:18.098 ANA Transition Time : 10 sec 00:26:18.098 00:26:18.098 Asymmetric Namespace Access Capabilities 00:26:18.098 ANA Optimized State : Supported 00:26:18.098 ANA Non-Optimized State : Supported 00:26:18.098 ANA Inaccessible State : Supported 00:26:18.098 ANA Persistent Loss State : Supported 00:26:18.098 ANA Change State : Supported 00:26:18.098 ANAGRPID is not changed : No 00:26:18.098 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:18.098 00:26:18.098 ANA Group Identifier Maximum : 128 00:26:18.098 Number of ANA Group Identifiers : 128 00:26:18.098 Max Number of Allowed Namespaces : 1024 00:26:18.098 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:18.098 Command Effects Log Page: Supported 00:26:18.098 Get Log Page Extended Data: Supported 00:26:18.098 Telemetry Log Pages: Not Supported 00:26:18.098 Persistent Event Log Pages: Not Supported 00:26:18.098 Supported Log Pages Log Page: May Support 00:26:18.098 Commands Supported & Effects Log Page: Not Supported 00:26:18.098 Feature Identifiers & Effects Log Page:May Support 00:26:18.098 NVMe-MI Commands & Effects Log Page: May Support 00:26:18.098 Data Area 4 for Telemetry Log: Not Supported 00:26:18.098 Error Log Page Entries Supported: 128 00:26:18.098 Keep Alive: Supported 00:26:18.098 Keep Alive Granularity: 1000 ms 00:26:18.098 00:26:18.098 NVM Command Set Attributes 00:26:18.098 ========================== 00:26:18.098 Submission Queue Entry Size 00:26:18.098 Max: 64 00:26:18.098 Min: 64 00:26:18.098 Completion Queue Entry Size 00:26:18.098 Max: 16 00:26:18.098 Min: 16 00:26:18.098 Number of Namespaces: 1024 00:26:18.098 Compare Command: Not Supported 00:26:18.098 Write Uncorrectable Command: Not Supported 00:26:18.098 Dataset Management Command: Supported 00:26:18.098 Write Zeroes Command: Supported 00:26:18.098 Set Features Save Field: Not Supported 00:26:18.098 Reservations: Not Supported 00:26:18.098 Timestamp: Not Supported 
00:26:18.098 Copy: Not Supported 00:26:18.098 Volatile Write Cache: Present 00:26:18.098 Atomic Write Unit (Normal): 1 00:26:18.098 Atomic Write Unit (PFail): 1 00:26:18.098 Atomic Compare & Write Unit: 1 00:26:18.098 Fused Compare & Write: Not Supported 00:26:18.098 Scatter-Gather List 00:26:18.098 SGL Command Set: Supported 00:26:18.098 SGL Keyed: Not Supported 00:26:18.098 SGL Bit Bucket Descriptor: Not Supported 00:26:18.099 SGL Metadata Pointer: Not Supported 00:26:18.099 Oversized SGL: Not Supported 00:26:18.099 SGL Metadata Address: Not Supported 00:26:18.099 SGL Offset: Supported 00:26:18.099 Transport SGL Data Block: Not Supported 00:26:18.099 Replay Protected Memory Block: Not Supported 00:26:18.099 00:26:18.099 Firmware Slot Information 00:26:18.099 ========================= 00:26:18.099 Active slot: 0 00:26:18.099 00:26:18.099 Asymmetric Namespace Access 00:26:18.099 =========================== 00:26:18.099 Change Count : 0 00:26:18.099 Number of ANA Group Descriptors : 1 00:26:18.099 ANA Group Descriptor : 0 00:26:18.099 ANA Group ID : 1 00:26:18.099 Number of NSID Values : 1 00:26:18.099 Change Count : 0 00:26:18.099 ANA State : 1 00:26:18.099 Namespace Identifier : 1 00:26:18.099 00:26:18.099 Commands Supported and Effects 00:26:18.099 ============================== 00:26:18.099 Admin Commands 00:26:18.099 -------------- 00:26:18.099 Get Log Page (02h): Supported 00:26:18.099 Identify (06h): Supported 00:26:18.099 Abort (08h): Supported 00:26:18.099 Set Features (09h): Supported 00:26:18.099 Get Features (0Ah): Supported 00:26:18.099 Asynchronous Event Request (0Ch): Supported 00:26:18.099 Keep Alive (18h): Supported 00:26:18.099 I/O Commands 00:26:18.099 ------------ 00:26:18.099 Flush (00h): Supported 00:26:18.099 Write (01h): Supported LBA-Change 00:26:18.099 Read (02h): Supported 00:26:18.099 Write Zeroes (08h): Supported LBA-Change 00:26:18.099 Dataset Management (09h): Supported 00:26:18.099 00:26:18.099 Error Log 00:26:18.099 ========= 
00:26:18.099 Entry: 0 00:26:18.099 Error Count: 0x3 00:26:18.099 Submission Queue Id: 0x0 00:26:18.099 Command Id: 0x5 00:26:18.099 Phase Bit: 0 00:26:18.099 Status Code: 0x2 00:26:18.099 Status Code Type: 0x0 00:26:18.099 Do Not Retry: 1 00:26:18.099 Error Location: 0x28 00:26:18.099 LBA: 0x0 00:26:18.099 Namespace: 0x0 00:26:18.099 Vendor Log Page: 0x0 00:26:18.099 ----------- 00:26:18.099 Entry: 1 00:26:18.099 Error Count: 0x2 00:26:18.099 Submission Queue Id: 0x0 00:26:18.099 Command Id: 0x5 00:26:18.099 Phase Bit: 0 00:26:18.099 Status Code: 0x2 00:26:18.099 Status Code Type: 0x0 00:26:18.099 Do Not Retry: 1 00:26:18.099 Error Location: 0x28 00:26:18.099 LBA: 0x0 00:26:18.099 Namespace: 0x0 00:26:18.099 Vendor Log Page: 0x0 00:26:18.099 ----------- 00:26:18.099 Entry: 2 00:26:18.099 Error Count: 0x1 00:26:18.099 Submission Queue Id: 0x0 00:26:18.099 Command Id: 0x4 00:26:18.099 Phase Bit: 0 00:26:18.099 Status Code: 0x2 00:26:18.099 Status Code Type: 0x0 00:26:18.099 Do Not Retry: 1 00:26:18.099 Error Location: 0x28 00:26:18.099 LBA: 0x0 00:26:18.099 Namespace: 0x0 00:26:18.099 Vendor Log Page: 0x0 00:26:18.099 00:26:18.099 Number of Queues 00:26:18.099 ================ 00:26:18.099 Number of I/O Submission Queues: 128 00:26:18.099 Number of I/O Completion Queues: 128 00:26:18.099 00:26:18.099 ZNS Specific Controller Data 00:26:18.099 ============================ 00:26:18.099 Zone Append Size Limit: 0 00:26:18.099 00:26:18.099 00:26:18.099 Active Namespaces 00:26:18.099 ================= 00:26:18.099 get_feature(0x05) failed 00:26:18.099 Namespace ID:1 00:26:18.099 Command Set Identifier: NVM (00h) 00:26:18.099 Deallocate: Supported 00:26:18.099 Deallocated/Unwritten Error: Not Supported 00:26:18.099 Deallocated Read Value: Unknown 00:26:18.099 Deallocate in Write Zeroes: Not Supported 00:26:18.099 Deallocated Guard Field: 0xFFFF 00:26:18.099 Flush: Supported 00:26:18.099 Reservation: Not Supported 00:26:18.099 Namespace Sharing Capabilities: Multiple 
Controllers 00:26:18.099 Size (in LBAs): 1953525168 (931GiB) 00:26:18.099 Capacity (in LBAs): 1953525168 (931GiB) 00:26:18.099 Utilization (in LBAs): 1953525168 (931GiB) 00:26:18.099 UUID: 7aca8c76-ef32-4732-8424-b38e018b4cd4 00:26:18.099 Thin Provisioning: Not Supported 00:26:18.099 Per-NS Atomic Units: Yes 00:26:18.099 Atomic Boundary Size (Normal): 0 00:26:18.099 Atomic Boundary Size (PFail): 0 00:26:18.099 Atomic Boundary Offset: 0 00:26:18.099 NGUID/EUI64 Never Reused: No 00:26:18.099 ANA group ID: 1 00:26:18.099 Namespace Write Protected: No 00:26:18.099 Number of LBA Formats: 1 00:26:18.099 Current LBA Format: LBA Format #00 00:26:18.099 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:18.099 00:26:18.099 12:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:18.099 12:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:18.099 12:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:18.099 12:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:18.099 12:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:18.099 12:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:18.099 12:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:18.099 rmmod nvme_tcp 00:26:18.099 rmmod nvme_fabrics 00:26:18.099 12:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:18.099 12:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:18.099 12:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:18.099 12:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:26:18.099 12:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:18.099 12:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:18.099 12:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:18.099 12:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:18.099 12:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:18.099 12:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:18.099 12:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:18.099 12:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:18.099 12:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:18.099 12:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.099 12:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:18.099 12:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.636 12:34:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:20.636 12:34:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:20.636 12:34:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:20.636 12:34:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:26:20.636 12:34:42 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:20.636 12:34:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:20.636 12:34:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:20.636 12:34:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:20.636 12:34:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:20.636 12:34:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:20.636 12:34:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:26:23.243 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:23.243 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:23.243 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:23.243 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:23.243 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:23.243 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:23.243 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:23.243 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:23.243 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:23.243 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:23.243 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:23.243 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:23.243 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:23.243 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:23.243 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:23.243 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:26:24.266 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:24.266 00:26:24.266 real 0m16.639s 00:26:24.266 user 0m4.449s 00:26:24.266 sys 0m8.593s 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:24.266 ************************************ 00:26:24.266 END TEST nvmf_identify_kernel_target 00:26:24.266 ************************************ 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.266 ************************************ 00:26:24.266 START TEST nvmf_auth_host 00:26:24.266 ************************************ 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:24.266 * Looking for test storage... 
00:26:24.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:24.266 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:24.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.267 --rc genhtml_branch_coverage=1 00:26:24.267 --rc genhtml_function_coverage=1 00:26:24.267 --rc genhtml_legend=1 00:26:24.267 --rc geninfo_all_blocks=1 00:26:24.267 --rc geninfo_unexecuted_blocks=1 00:26:24.267 00:26:24.267 ' 00:26:24.267 12:34:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:24.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.267 --rc genhtml_branch_coverage=1 00:26:24.267 --rc genhtml_function_coverage=1 00:26:24.267 --rc genhtml_legend=1 00:26:24.267 --rc geninfo_all_blocks=1 00:26:24.267 --rc geninfo_unexecuted_blocks=1 00:26:24.267 00:26:24.267 ' 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:24.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.267 --rc genhtml_branch_coverage=1 00:26:24.267 --rc genhtml_function_coverage=1 00:26:24.267 --rc genhtml_legend=1 00:26:24.267 --rc geninfo_all_blocks=1 00:26:24.267 --rc geninfo_unexecuted_blocks=1 00:26:24.267 00:26:24.267 ' 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:24.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.267 --rc genhtml_branch_coverage=1 00:26:24.267 --rc genhtml_function_coverage=1 00:26:24.267 --rc genhtml_legend=1 00:26:24.267 --rc geninfo_all_blocks=1 00:26:24.267 --rc geninfo_unexecuted_blocks=1 00:26:24.267 00:26:24.267 ' 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:24.267 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:24.526 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:24.526 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:24.526 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:24.526 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:24.526 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:24.526 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:24.526 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.527 12:34:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:24.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:24.527 12:34:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:24.527 12:34:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:31.102 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:31.102 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:31.102 Found net devices under 0000:86:00.0: cvl_0_0 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:31.102 Found net devices under 0000:86:00.1: cvl_0_1 00:26:31.102 12:34:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:31.102 12:34:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:31.102 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:31.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:31.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:26:31.103 00:26:31.103 --- 10.0.0.2 ping statistics --- 00:26:31.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.103 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:31.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:31.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:26:31.103 00:26:31.103 --- 10.0.0.1 ping statistics --- 00:26:31.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.103 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1761547 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1761547 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1761547 ']' 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:31.103 12:34:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0266b44576a12e0ab5277aa70d16a708 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.s3q 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0266b44576a12e0ab5277aa70d16a708 0 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0266b44576a12e0ab5277aa70d16a708 0 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0266b44576a12e0ab5277aa70d16a708 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.s3q 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.s3q 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.s3q 
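The `gen_dhchap_key null 32` pass traced above reads 16 random bytes as a 32-character hex string, then pipes it through an inline `python -` step (`format_dhchap_key`) to produce a `DHHC-1` secret. A standalone sketch of that flow is below; it is an illustrative reconstruction, not the exact `nvmf/common.sh` helper, and it assumes the DH-HMAC-CHAP secret representation is `DHHC-1:<digest-id>:<base64(ascii-secret || crc32-LE)>:` with digest ids null=0, sha256=1, sha384=2, sha512=3 as in the `digests` map in the trace.

```shell
# Illustrative sketch of gen_dhchap_key/format_dhchap_key (assumptions noted above).
gen_dhchap_key() {
    local digest=$1 len=$2 key
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    # len hex chars from len/2 random bytes; portable stand-in for the
    # trace's `xxd -p -c0 -l $((len / 2)) /dev/urandom`
    key=$(head -c $((len / 2)) /dev/urandom | od -An -v -tx1 | tr -d ' \n')
    # Assumed DHHC-1 layout: ASCII hex secret + little-endian CRC32, base64-encoded
    python3 -c 'import base64, sys, zlib
k = sys.argv[1].encode("ascii")
crc = zlib.crc32(k).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(k + crc).decode()}:")' \
        "$key" "${digests[$digest]}"
}

gen_dhchap_key null 32      # e.g. DHHC-1:00:<base64>:
gen_dhchap_key sha512 64    # e.g. DHHC-1:03:<base64>:
```

The two-hex-digit field after `DHHC-1:` carries the hash id, which is why the sha512 companion key in the trace is formatted with digest argument 3 while the plaintext key uses 0.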
00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=18ec0bbf4e72672dc81ac5ee14e5fea1deee099ee37ac95e1f646428ec66c0dc 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.vc5 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 18ec0bbf4e72672dc81ac5ee14e5fea1deee099ee37ac95e1f646428ec66c0dc 3 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 18ec0bbf4e72672dc81ac5ee14e5fea1deee099ee37ac95e1f646428ec66c0dc 3 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=18ec0bbf4e72672dc81ac5ee14e5fea1deee099ee37ac95e1f646428ec66c0dc 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.vc5 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.vc5 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.vc5 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2df2fc2d1e7b1bbab8d21e6c3b6ffef066c46d1e830a3730 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Jxe 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2df2fc2d1e7b1bbab8d21e6c3b6ffef066c46d1e830a3730 0 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2df2fc2d1e7b1bbab8d21e6c3b6ffef066c46d1e830a3730 0 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2df2fc2d1e7b1bbab8d21e6c3b6ffef066c46d1e830a3730 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Jxe 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Jxe 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Jxe 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8c9bde9078aafa93a3354f6ebf6b00a87358b049d137cc55 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:31.103 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.WTm 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8c9bde9078aafa93a3354f6ebf6b00a87358b049d137cc55 2 00:26:31.104 12:34:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8c9bde9078aafa93a3354f6ebf6b00a87358b049d137cc55 2 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8c9bde9078aafa93a3354f6ebf6b00a87358b049d137cc55 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.WTm 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.WTm 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.WTm 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4cbdcd5f75d2662377ecf35501293af3 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.HuU 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4cbdcd5f75d2662377ecf35501293af3 1 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4cbdcd5f75d2662377ecf35501293af3 1 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4cbdcd5f75d2662377ecf35501293af3 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.HuU 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.HuU 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.HuU 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a11b4f6082332629326cb1877d2fdaed 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.z7n 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a11b4f6082332629326cb1877d2fdaed 1 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a11b4f6082332629326cb1877d2fdaed 1 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a11b4f6082332629326cb1877d2fdaed 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.z7n 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.z7n 00:26:31.104 12:34:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.z7n 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:31.104 12:34:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5334776a4439dfff30fabfda3aff3e026b4b94ea4ffc2dca 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.8Tn 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5334776a4439dfff30fabfda3aff3e026b4b94ea4ffc2dca 2 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5334776a4439dfff30fabfda3aff3e026b4b94ea4ffc2dca 2 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5334776a4439dfff30fabfda3aff3e026b4b94ea4ffc2dca 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.8Tn 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.8Tn 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.8Tn 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7cc50a9ce7e212b5008bd1c3e0913d5b 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.7Fq 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7cc50a9ce7e212b5008bd1c3e0913d5b 0 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7cc50a9ce7e212b5008bd1c3e0913d5b 0 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7cc50a9ce7e212b5008bd1c3e0913d5b 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.7Fq 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.7Fq 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.7Fq 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7e1b811097a205f46b193f33fe9e9c101996f4a375c88a5d25a5fb4ea0103a02 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.eav 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7e1b811097a205f46b193f33fe9e9c101996f4a375c88a5d25a5fb4ea0103a02 3 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7e1b811097a205f46b193f33fe9e9c101996f4a375c88a5d25a5fb4ea0103a02 3 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7e1b811097a205f46b193f33fe9e9c101996f4a375c88a5d25a5fb4ea0103a02 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:31.104 12:34:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.eav 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.eav 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.eav 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1761547 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1761547 ']' 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.104 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:31.105 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:31.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:31.105 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:31.105 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.s3q 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.vc5 ]] 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vc5 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Jxe 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.WTm ]] 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.WTm 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.HuU 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.z7n ]] 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.z7n 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.8Tn 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.7Fq ]] 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.7Fq 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.eav 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:31.364 12:34:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:31.364 12:34:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:26:34.654 Waiting for block devices as requested 00:26:34.654 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:34.654 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:34.654 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:34.654 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:34.654 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:34.654 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:34.654 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:34.654 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:34.654 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:34.912 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:34.913 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:34.913 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:34.913 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:35.171 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:35.171 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:35.171 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:35.429 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:35.997 12:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:35.997 12:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:35.997 12:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:35.997 12:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:35.997 12:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:26:35.997 12:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:35.997 12:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:35.997 12:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:35.997 12:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/spdk-gpt.py nvme0n1 00:26:35.997 No valid GPT data, bailing 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:35.997 00:26:35.997 Discovery Log Number of Records 2, Generation counter 2 00:26:35.997 =====Discovery Log Entry 0====== 00:26:35.997 trtype: tcp 00:26:35.997 adrfam: ipv4 00:26:35.997 subtype: current discovery subsystem 00:26:35.997 treq: not specified, sq flow control disable supported 00:26:35.997 portid: 1 00:26:35.997 trsvcid: 4420 00:26:35.997 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:35.997 traddr: 10.0.0.1 00:26:35.997 eflags: none 00:26:35.997 sectype: none 00:26:35.997 =====Discovery Log Entry 1====== 00:26:35.997 trtype: tcp 00:26:35.997 adrfam: ipv4 00:26:35.997 subtype: nvme subsystem 00:26:35.997 treq: not specified, sq flow control disable supported 00:26:35.997 portid: 1 00:26:35.997 trsvcid: 4420 00:26:35.997 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:35.997 traddr: 10.0.0.1 00:26:35.997 eflags: none 00:26:35.997 sectype: none 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: ]] 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:35.997 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:35.998 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:35.998 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.998 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:35.998 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.998 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.998 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.998 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.998 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:35.998 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:35.998 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:35.998 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.998 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.998 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:35.998 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.998 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:35.998 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:35.998 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:35.998 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:35.998 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.998 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.257 nvme0n1 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: ]] 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
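The connect_authenticate step traced above boils down to two RPCs per iteration: one constraining which digests and DH groups the initiator may negotiate, one attaching with a specific key pair; a successful attach is the proof that mutual DH-HMAC-CHAP authentication worked. A sketch of the sha256/ffdhe2048/keyid=0 case, using the parameters visible in this run:

```shell
# Pin negotiation to a single digest and DH group for this iteration.
./scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Attach to the kernel target with key0/ckey0 from the keyring.
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Tear down before the next digest/dhgroup/keyid combination,
# as the loop above does with bdev_nvme_detach_controller.
./scripts/rpc.py bdev_nvme_detach_controller nvme0
```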
00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.257 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.517 nvme0n1 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.517 12:34:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: ]] 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.517 
12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.517 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.776 nvme0n1 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: ]] 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.776 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:36.777 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.777 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:36.777 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:36.777 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:36.777 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:36.777 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.777 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:26:37.035 nvme0n1 00:26:37.035 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.035 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.035 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.035 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.035 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.035 12:34:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: ]] 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.035 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.036 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.036 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.036 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.036 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.036 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.036 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.036 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:37.036 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.036 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.036 nvme0n1 00:26:37.036 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:37.294 12:34:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.294 nvme0n1 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.294 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.553 
12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: ]] 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:37.553 
12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.553 12:34:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.553 nvme0n1 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.553 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.812 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.812 12:34:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.812 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:37.812 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.812 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:37.812 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:37.812 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:37.812 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:37.812 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:37.812 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:37.812 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:37.812 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: ]] 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:37.813 12:34:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.813 nvme0n1 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.813 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.072 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.072 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.072 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:38.072 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.072 12:34:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.072 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:38.072 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:38.072 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:26:38.072 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:26:38.072 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.072 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:38.072 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:26:38.072 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: ]] 00:26:38.072 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:26:38.072 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:38.072 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.072 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.072 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:38.072 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:38.072 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.072 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:26:38.072 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.072 12:34:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.072 nvme0n1 00:26:38.072 12:35:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.072 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:38.331 12:35:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: ]] 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.331 nvme0n1 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.331 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.590 12:35:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.590 nvme0n1 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.590 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: ]] 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.849 12:35:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.108 nvme0n1 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: ]] 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:39.108 
12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.108 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.367 nvme0n1 00:26:39.367 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.367 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.367 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.367 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.367 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.367 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.367 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.367 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.367 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.367 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.367 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.367 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.367 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:39.367 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.367 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.367 12:35:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:39.367 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:39.367 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:26:39.367 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:26:39.367 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.367 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:39.367 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:26:39.367 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: ]] 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.368 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.627 nvme0n1 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.627 12:35:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:39.627 
12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: ]] 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.627 12:35:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.627 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.886 nvme0n1 00:26:39.886 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.886 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.886 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.886 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.886 12:35:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.886 12:35:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.886 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.886 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.886 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.886 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.145 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.145 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.145 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:40.145 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.145 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:40.145 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:40.145 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.146 
12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.146 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.405 nvme0n1 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: ]] 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.405 12:35:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.405 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.664 nvme0n1 00:26:40.664 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.664 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.664 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.664 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.664 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.664 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.664 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.664 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.664 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.664 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.923 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.923 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.923 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:40.923 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.923 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:40.923 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:40.923 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:40.923 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:40.923 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:40.923 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:40.923 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:40.923 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:40.923 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: ]] 00:26:40.923 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:40.923 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:40.923 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.923 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:40.923 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:40.923 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:40.923 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.923 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:40.923 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.923 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.924 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.924 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.924 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.924 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.924 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.924 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.924 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.924 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.924 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.924 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.924 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.924 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.924 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:40.924 12:35:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.924 12:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.183 nvme0n1 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: ]] 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.183 12:35:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.183 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.752 nvme0n1 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:41.752 12:35:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: ]] 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.752 12:35:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.752 12:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.011 nvme0n1 00:26:42.011 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.011 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.011 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.011 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.011 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.011 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.011 12:35:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.011 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.011 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.011 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.270 12:35:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.270 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.530 nvme0n1 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: ]] 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:42.530 12:35:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:42.530 12:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:43.100 nvme0n1
00:26:43.100 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:43.100 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:43.100 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:43.100 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:43.100 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:43.100 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==:
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==:
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==:
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: ]]
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==:
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:43.361 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:43.929 nvme0n1
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9:
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE:
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9:
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: ]]
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE:
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:43.929 12:35:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:44.498 nvme0n1
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==:
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy:
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==:
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: ]]
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy:
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:44.498 12:35:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.066 nvme0n1
00:26:45.066 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.066 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:45.066 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:45.066 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.066 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.066 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=:
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=:
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.325 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.892 nvme0n1
00:26:45.892 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.892 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:45.892 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:45.892 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.892 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.892 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.892 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:45.892 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:45.892 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.892 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.892 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.892 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:26:45.892 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:45.892 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:45.892 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:26:45.892 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:45.892 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:45.892 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:45.892 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:45.892 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h:
00:26:45.892 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=:
00:26:45.892 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:45.892 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h:
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: ]]
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=:
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.893 12:35:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:45.893 nvme0n1
00:26:45.893 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.893 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:45.893 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:45.893 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.893 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==:
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==:
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==:
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: ]]
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==:
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.152 nvme0n1
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.152 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9:
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE:
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9:
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: ]]
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE:
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.411 12:35:08
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: ]] 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.411 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.671 nvme0n1 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.671 12:35:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.671 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.931 nvme0n1 00:26:46.931 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.931 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.931 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.931 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.931 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.931 12:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: ]] 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.931 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.190 nvme0n1 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:47.190 
12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: ]] 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.190 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.191 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.191 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.191 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.191 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.191 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.191 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.191 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.191 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:47.191 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.191 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.450 nvme0n1 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 
00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: ]] 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.450 12:35:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.450 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.710 nvme0n1 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.710 12:35:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: ]] 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.710 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.969 nvme0n1 00:26:47.969 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.969 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.969 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.969 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.969 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.969 12:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.969 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.229 nvme0n1 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.229 12:35:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: ]] 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.229 12:35:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:48.229 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.229 12:35:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.489 nvme0n1 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: ]] 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.489 
12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.489 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.748 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.748 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.748 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.748 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.748 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:48.748 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.748 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.748 nvme0n1 00:26:48.748 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.748 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.748 12:35:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.748 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.748 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.008 12:35:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: ]] 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.008 12:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.266 nvme0n1 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: ]] 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.266 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.267 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:49.267 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:49.267 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.267 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:49.267 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.267 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.267 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.267 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.267 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.267 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.267 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.267 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.267 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.267 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.267 12:35:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.267 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.267 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.267 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.267 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:49.267 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.267 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.526 nvme0n1 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.526 12:35:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:49.526 12:35:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:49.526 
12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.526 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.785 nvme0n1 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:49.785 12:35:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: ]] 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.785 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.044 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.044 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.044 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.044 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.044 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.044 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.044 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.044 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.044 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.044 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.044 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.044 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.044 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:50.044 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.044 12:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.304 nvme0n1 
00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:50.304 12:35:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: ]] 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.304 
12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.304 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.873 nvme0n1 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.873 12:35:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:50.873 12:35:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: ]] 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.873 12:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.132 nvme0n1 00:26:51.132 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.132 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.132 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.132 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.132 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.132 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.132 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.132 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.132 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.132 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.391 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.391 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.391 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:51.391 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.391 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:51.391 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:51.391 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:51.391 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:51.391 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:51.391 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:51.391 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:51.391 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:51.391 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: ]] 00:26:51.391 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:51.391 12:35:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:51.391 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.391 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:51.391 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:51.391 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:51.391 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.391 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:51.392 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.392 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.392 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.392 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.392 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.392 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.392 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.392 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.392 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.392 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.392 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.392 12:35:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.392 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.392 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.392 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:51.392 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.392 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.651 nvme0n1 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.651 12:35:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.651 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.652 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.652 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.652 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.652 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.652 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.652 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.652 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:51.652 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:51.652 12:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.220 nvme0n1 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:52.220 12:35:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: ]] 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.220 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.788 nvme0n1 00:26:52.788 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:52.788 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.788 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.788 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.788 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.788 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.788 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.788 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.788 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.788 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.788 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.788 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.788 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: ]] 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.789 12:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.356 nvme0n1 00:26:53.356 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.356 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.356 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.356 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:53.356 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.356 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.356 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.356 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.356 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.356 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: ]] 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.616 12:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.184 nvme0n1 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: ]] 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.184 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.752 nvme0n1 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:54.752 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.753 12:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:55.320 nvme0n1 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: ]] 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:55.320 12:35:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.320 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.580 nvme0n1 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: ]] 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.580 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.840 nvme0n1 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: ]] 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.840 12:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.099 nvme0n1 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: ]] 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.099 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.100 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.100 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.100 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.100 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.100 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.100 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.100 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:26:56.100 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.100 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.100 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:56.100 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.100 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.359 nvme0n1 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.359 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:56.619 nvme0n1 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:56.619 12:35:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: ]] 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.619 12:35:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.619 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.878 nvme0n1 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:56.878 12:35:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: ]] 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:56.878 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:56.879 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.879 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:56.879 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:56.879 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:56.879 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.879 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:56.879 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.879 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.879 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.879 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.879 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:26:56.879 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:56.879 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:56.879 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.879 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.879 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:56.879 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.879 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:56.879 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:56.879 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:56.879 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:56.879 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.879 12:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.879 nvme0n1 00:26:56.879 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.879 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.879 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.879 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.879 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.138 
12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: ]] 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.138 12:35:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.138 nvme0n1 00:26:57.138 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.395 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.395 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.395 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.395 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.395 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.395 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.395 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.395 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.395 12:35:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.395 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.395 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.395 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:57.395 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.395 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.395 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.395 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:57.395 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:57.395 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: ]] 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.396 12:35:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.396 nvme0n1 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.396 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:57.654 12:35:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:57.654 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.655 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.655 nvme0n1 00:26:57.655 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.655 
12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.655 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.655 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.655 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.655 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.913 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.913 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.913 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.913 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.913 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.913 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:57.913 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.913 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:57.913 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.913 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:57.913 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:57.913 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:57.913 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:57.913 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:57.913 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:57.913 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: ]] 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.914 
12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.914 12:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.173 nvme0n1 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.173 12:35:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: ]] 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.173 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.431 nvme0n1 00:26:58.431 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.431 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.431 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.431 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.431 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.431 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.431 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.431 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.431 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.431 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: ]] 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.432 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.690 nvme0n1 00:26:58.690 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.690 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.690 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.690 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.690 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.690 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.690 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.690 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.690 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.690 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.948 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.948 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.948 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:58.948 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.948 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:58.948 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: ]] 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:58.949 12:35:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.949 12:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.207 nvme0n1 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.207 12:35:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.207 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.208 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.208 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.208 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.208 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.208 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.208 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:59.208 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.208 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.467 nvme0n1 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.467 
12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: ]] 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.467 12:35:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.467 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.034 nvme0n1 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:00.034 12:35:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: ]] 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:00.034 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:00.035 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:00.035 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.035 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:00.035 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.035 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.035 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.035 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.035 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.035 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.035 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:27:00.035 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.035 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.035 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.035 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.035 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.035 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.035 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.035 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:00.035 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.035 12:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.293 nvme0n1 00:27:00.293 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.293 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.293 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.293 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.293 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.293 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.293 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:00.293 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: ]] 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:27:00.294 
12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.294 12:35:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.294 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.861 nvme0n1 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.861 12:35:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: ]] 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.861 12:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.119 nvme0n1 00:27:01.119 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.119 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.119 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.119 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.119 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.119 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.378 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.378 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.378 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.378 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.378 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.378 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.378 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:01.378 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.378 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.378 12:35:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:01.378 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:01.378 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:27:01.378 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:01.378 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:01.378 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:01.378 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:27:01.378 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:01.378 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:01.378 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.378 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.379 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:01.379 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:01.379 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.379 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:01.379 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.379 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.379 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.379 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.379 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.379 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.379 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.379 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.379 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.379 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.379 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.379 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.379 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.379 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.379 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:01.379 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.379 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.637 nvme0n1 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.637 
12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDI2NmI0NDU3NmExMmUwYWI1Mjc3YWE3MGQxNmE3MDhopU9h: 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: ]] 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MThlYzBiYmY0ZTcyNjcyZGM4MWFjNWVlMTRlNWZlYTFkZWVlMDk5ZWUzN2FjOTVlMWY2NDY0MjhlYzY2YzBkY8lqT/I=: 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.637 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:01.638 12:35:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:01.638 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:01.638 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.638 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.638 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:01.638 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.638 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:01.638 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:01.638 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:01.638 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:01.638 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.638 12:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.205 nvme0n1 00:27:02.205 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.205 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.205 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.205 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.205 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.205 12:35:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:27:02.464 12:35:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: ]] 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.464 12:35:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.464 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.031 nvme0n1 00:27:03.031 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.031 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.031 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.031 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.031 12:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.031 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.031 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.031 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.031 12:35:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.031 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.031 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.031 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.031 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:03.031 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.031 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:03.031 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:03.031 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:03.031 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:27:03.031 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:27:03.031 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:03.031 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:03.031 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: ]] 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:03.032 12:35:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.032 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.598 nvme0n1 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTMzNDc3NmE0NDM5ZGZmZjMwZmFiZmRhM2FmZjNlMDI2YjRiOTRlYTRmZmMyZGNhxgMw+g==: 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: ]] 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NjNTBhOWNlN2UyMTJiNTAwOGJkMWMzZTA5MTNkNWL9ZkMy: 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:03.598 12:35:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:03.598 12:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.165 nvme0n1 00:27:04.165 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.165 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.165 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.165 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.165 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.165 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2UxYjgxMTA5N2EyMDVmNDZiMTkzZjMzZmU5ZTljMTAxOTk2ZjRhMzc1Yzg4YTVkMjVhNWZiNGVhMDEwM2EwMtg7ymw=: 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.424 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.424 
12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.425 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.425 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.425 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.425 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.425 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.425 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.425 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.425 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.425 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.425 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:04.425 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.425 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.992 nvme0n1 00:27:04.992 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.992 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.992 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.992 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.992 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:04.992 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.992 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.992 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.992 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.992 12:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: ]] 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.992 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.992 request: 00:27:04.992 { 00:27:04.992 "name": "nvme0", 00:27:04.992 "trtype": "tcp", 00:27:04.992 "traddr": "10.0.0.1", 00:27:04.992 "adrfam": "ipv4", 00:27:04.992 "trsvcid": "4420", 00:27:04.992 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:04.992 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:04.992 "prchk_reftag": false, 00:27:04.992 "prchk_guard": false, 00:27:04.992 "hdgst": false, 00:27:04.992 "ddgst": false, 00:27:04.992 "allow_unrecognized_csi": false, 00:27:04.993 "method": "bdev_nvme_attach_controller", 00:27:04.993 "req_id": 1 00:27:04.993 } 00:27:04.993 Got JSON-RPC error 
response 00:27:04.993 response: 00:27:04.993 { 00:27:04.993 "code": -5, 00:27:04.993 "message": "Input/output error" 00:27:04.993 } 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.993 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.252 request: 
00:27:05.252 { 00:27:05.252 "name": "nvme0", 00:27:05.252 "trtype": "tcp", 00:27:05.252 "traddr": "10.0.0.1", 00:27:05.252 "adrfam": "ipv4", 00:27:05.252 "trsvcid": "4420", 00:27:05.252 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:05.252 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:05.252 "prchk_reftag": false, 00:27:05.252 "prchk_guard": false, 00:27:05.252 "hdgst": false, 00:27:05.252 "ddgst": false, 00:27:05.252 "dhchap_key": "key2", 00:27:05.252 "allow_unrecognized_csi": false, 00:27:05.252 "method": "bdev_nvme_attach_controller", 00:27:05.252 "req_id": 1 00:27:05.252 } 00:27:05.252 Got JSON-RPC error response 00:27:05.252 response: 00:27:05.252 { 00:27:05.252 "code": -5, 00:27:05.252 "message": "Input/output error" 00:27:05.252 } 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:05.252 12:35:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.252 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.252 request: 00:27:05.252 { 00:27:05.252 "name": "nvme0", 00:27:05.252 "trtype": "tcp", 00:27:05.252 "traddr": "10.0.0.1", 00:27:05.252 "adrfam": "ipv4", 00:27:05.252 "trsvcid": "4420", 00:27:05.252 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:05.252 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:05.252 "prchk_reftag": false, 00:27:05.252 "prchk_guard": false, 00:27:05.252 "hdgst": false, 00:27:05.252 "ddgst": false, 00:27:05.252 "dhchap_key": "key1", 00:27:05.252 "dhchap_ctrlr_key": "ckey2", 00:27:05.252 "allow_unrecognized_csi": false, 00:27:05.252 "method": "bdev_nvme_attach_controller", 00:27:05.252 "req_id": 1 00:27:05.252 } 00:27:05.252 Got JSON-RPC error response 00:27:05.252 response: 00:27:05.252 { 00:27:05.252 "code": -5, 00:27:05.252 "message": "Input/output error" 00:27:05.252 } 00:27:05.253 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:05.253 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:05.253 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:05.253 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:05.253 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:05.253 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:05.253 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:05.253 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:05.253 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:05.253 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.253 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.253 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:05.253 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.253 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:05.253 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:05.253 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:05.253 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:05.253 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.253 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.512 nvme0n1 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:05.512 12:35:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: ]] 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:05.512 
12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.512 request: 00:27:05.512 { 00:27:05.512 "name": "nvme0", 00:27:05.512 "dhchap_key": "key1", 00:27:05.512 "dhchap_ctrlr_key": "ckey2", 00:27:05.512 "method": "bdev_nvme_set_keys", 00:27:05.512 "req_id": 1 00:27:05.512 } 00:27:05.512 Got JSON-RPC error response 00:27:05.512 response: 
00:27:05.512 { 00:27:05.512 "code": -13, 00:27:05.512 "message": "Permission denied" 00:27:05.512 } 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.512 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.771 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.771 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:05.771 12:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:06.706 12:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.706 12:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:06.706 12:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.706 12:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.706 12:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.706 12:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:06.706 12:35:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmRmMmZjMmQxZTdiMWJiYWI4ZDIxZTZjM2I2ZmZlZjA2NmM0NmQxZTgzMGEzNzMwUeAcFg==: 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: ]] 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM5YmRlOTA3OGFhZmE5M2EzMzU0ZjZlYmY2YjAwYTg3MzU4YjA0OWQxMzdjYzU10piEiw==: 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.641 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:07.642 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:07.642 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:07.642 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:07.642 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.642 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.900 nvme0n1 00:27:07.900 12:35:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.900 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:07.900 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.900 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:07.900 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:07.900 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:07.900 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:27:07.900 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:27:07.900 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:07.900 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:07.900 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGNiZGNkNWY3NWQyNjYyMzc3ZWNmMzU1MDEyOTNhZjPKdBF9: 00:27:07.900 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: ]] 00:27:07.900 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTExYjRmNjA4MjMzMjYyOTMyNmNiMTg3N2QyZmRhZWRU8kQE: 00:27:07.900 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:07.900 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:07.900 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:07.900 12:35:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:07.900 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:07.900 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:07.900 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:07.900 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:07.900 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.900 12:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.900 request: 00:27:07.900 { 00:27:07.901 "name": "nvme0", 00:27:07.901 "dhchap_key": "key2", 00:27:07.901 "dhchap_ctrlr_key": "ckey1", 00:27:07.901 "method": "bdev_nvme_set_keys", 00:27:07.901 "req_id": 1 00:27:07.901 } 00:27:07.901 Got JSON-RPC error response 00:27:07.901 response: 00:27:07.901 { 00:27:07.901 "code": -13, 00:27:07.901 "message": "Permission denied" 00:27:07.901 } 00:27:07.901 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:07.901 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:07.901 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:07.901 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:07.901 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:07.901 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:07.901 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.901 12:35:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.901 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.901 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.901 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:07.901 12:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:09.277 rmmod nvme_tcp 
00:27:09.277 rmmod nvme_fabrics 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1761547 ']' 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1761547 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1761547 ']' 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1761547 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1761547 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1761547' 00:27:09.277 killing process with pid 1761547 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1761547 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1761547 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:09.277 12:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.813 12:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:11.813 12:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:11.813 12:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:11.813 12:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:11.813 12:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:11.813 12:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:11.813 12:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:11.814 12:35:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:11.814 12:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:11.814 12:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:11.814 12:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:11.814 12:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:11.814 12:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:27:14.349 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:14.349 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:14.349 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:14.349 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:14.349 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:14.349 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:14.349 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:14.349 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:14.349 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:14.349 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:14.349 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:14.349 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:14.349 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:14.349 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:14.349 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:14.349 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:15.286 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:27:15.286 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.s3q /tmp/spdk.key-null.Jxe /tmp/spdk.key-sha256.HuU /tmp/spdk.key-sha384.8Tn 
/tmp/spdk.key-sha512.eav /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvme-auth.log 00:27:15.286 12:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:27:18.576 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:18.576 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:18.576 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:18.576 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:18.576 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:18.576 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:18.576 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:18.576 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:18.576 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:18.576 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:18.576 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:18.576 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:18.576 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:18.576 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:18.576 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:18.576 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:18.576 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:18.576 00:27:18.576 real 0m54.034s 00:27:18.576 user 0m48.704s 00:27:18.576 sys 0m12.681s 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.576 ************************************ 00:27:18.576 END TEST nvmf_auth_host 00:27:18.576 ************************************ 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 
-- # [[ tcp == \t\c\p ]] 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.576 ************************************ 00:27:18.576 START TEST nvmf_digest 00:27:18.576 ************************************ 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:18.576 * Looking for test storage... 00:27:18.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:18.576 12:35:40 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:18.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.576 --rc genhtml_branch_coverage=1 00:27:18.576 --rc genhtml_function_coverage=1 00:27:18.576 --rc genhtml_legend=1 00:27:18.576 --rc geninfo_all_blocks=1 00:27:18.576 --rc geninfo_unexecuted_blocks=1 00:27:18.576 00:27:18.576 ' 00:27:18.576 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:18.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.576 --rc genhtml_branch_coverage=1 00:27:18.576 --rc genhtml_function_coverage=1 00:27:18.576 --rc genhtml_legend=1 00:27:18.576 --rc geninfo_all_blocks=1 00:27:18.577 --rc geninfo_unexecuted_blocks=1 00:27:18.577 00:27:18.577 ' 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:18.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.577 --rc genhtml_branch_coverage=1 00:27:18.577 --rc genhtml_function_coverage=1 00:27:18.577 --rc genhtml_legend=1 00:27:18.577 --rc geninfo_all_blocks=1 00:27:18.577 --rc geninfo_unexecuted_blocks=1 00:27:18.577 00:27:18.577 ' 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:18.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.577 --rc genhtml_branch_coverage=1 00:27:18.577 --rc genhtml_function_coverage=1 00:27:18.577 --rc genhtml_legend=1 00:27:18.577 --rc geninfo_all_blocks=1 00:27:18.577 --rc geninfo_unexecuted_blocks=1 00:27:18.577 00:27:18.577 ' 00:27:18.577 12:35:40 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:18.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 
00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:18.577 12:35:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:25.151 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:25.152 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:25.152 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.152 12:35:46 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:25.152 Found net devices under 0000:86:00.0: cvl_0_0 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:25.152 Found net devices under 0000:86:00.1: cvl_0_1 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:25.152 12:35:46 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:25.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:25.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:27:25.152 00:27:25.152 --- 10.0.0.2 ping statistics --- 00:27:25.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.152 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:25.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:25.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:27:25.152 00:27:25.152 --- 10.0.0.1 ping statistics --- 00:27:25.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:25.152 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:25.152 ************************************ 00:27:25.152 START TEST nvmf_digest_clean 00:27:25.152 ************************************ 00:27:25.152 
12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1775294 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1775294 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1775294 ']' 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:25.152 12:35:46 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:25.152 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:25.152 [2024-12-10 12:35:46.528371] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:27:25.153 [2024-12-10 12:35:46.528413] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.153 [2024-12-10 12:35:46.608338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.153 [2024-12-10 12:35:46.648366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:25.153 [2024-12-10 12:35:46.648400] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:25.153 [2024-12-10 12:35:46.648407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:25.153 [2024-12-10 12:35:46.648413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:25.153 [2024-12-10 12:35:46.648418] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:25.153 [2024-12-10 12:35:46.648964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:25.153 null0 00:27:25.153 [2024-12-10 12:35:46.804246] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:25.153 [2024-12-10 12:35:46.828441] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1775319 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1775319 /var/tmp/bperf.sock 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1775319 ']' 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:25.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:25.153 12:35:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:25.153 [2024-12-10 12:35:46.879654] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:27:25.153 [2024-12-10 12:35:46.879696] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775319 ] 00:27:25.153 [2024-12-10 12:35:46.953344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.153 [2024-12-10 12:35:46.994231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.153 12:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:25.153 12:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:25.153 12:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:25.153 12:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:25.153 12:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:25.153 12:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:25.153 12:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:25.721 nvme0n1 00:27:25.721 12:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:25.721 12:35:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:25.721 Running I/O for 2 seconds... 00:27:28.034 24553.00 IOPS, 95.91 MiB/s [2024-12-10T11:35:50.202Z] 24672.00 IOPS, 96.38 MiB/s 00:27:28.035 Latency(us) 00:27:28.035 [2024-12-10T11:35:50.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:28.035 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:28.035 nvme0n1 : 2.05 24166.92 94.40 0.00 0.00 5187.85 2265.27 46274.11 00:27:28.035 [2024-12-10T11:35:50.203Z] =================================================================================================================== 00:27:28.035 [2024-12-10T11:35:50.203Z] Total : 24166.92 94.40 0.00 0.00 5187.85 2265.27 46274.11 00:27:28.035 { 00:27:28.035 "results": [ 00:27:28.035 { 00:27:28.035 "job": "nvme0n1", 00:27:28.035 "core_mask": "0x2", 00:27:28.035 "workload": "randread", 00:27:28.035 "status": "finished", 00:27:28.035 "queue_depth": 128, 00:27:28.035 "io_size": 4096, 00:27:28.035 "runtime": 2.047096, 00:27:28.035 "iops": 24166.91742839613, 00:27:28.035 "mibps": 94.40202120467238, 00:27:28.035 "io_failed": 0, 00:27:28.035 "io_timeout": 0, 00:27:28.035 "avg_latency_us": 5187.853751054616, 00:27:28.035 "min_latency_us": 2265.2660869565216, 00:27:28.035 "max_latency_us": 46274.114782608696 00:27:28.035 } 00:27:28.035 ], 00:27:28.035 "core_count": 1 00:27:28.035 } 00:27:28.035 12:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:28.035 12:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:27:28.035 12:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:28.035 12:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:28.035 | select(.opcode=="crc32c") 00:27:28.035 | "\(.module_name) \(.executed)"' 00:27:28.035 12:35:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:28.035 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:28.035 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:28.035 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:28.035 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:28.035 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1775319 00:27:28.035 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1775319 ']' 00:27:28.035 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1775319 00:27:28.035 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:28.035 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:28.035 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1775319 00:27:28.035 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:28.035 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:28.035 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1775319' 00:27:28.035 killing process with pid 1775319 00:27:28.035 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1775319 00:27:28.035 Received shutdown signal, test time was about 2.000000 seconds 00:27:28.035 00:27:28.035 Latency(us) 00:27:28.035 [2024-12-10T11:35:50.203Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:28.035 [2024-12-10T11:35:50.203Z] =================================================================================================================== 00:27:28.035 [2024-12-10T11:35:50.203Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:28.035 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1775319 00:27:28.294 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:28.294 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:28.294 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:28.294 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:28.294 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:28.294 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:28.294 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:28.294 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1775793 00:27:28.294 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 1775793 /var/tmp/bperf.sock 00:27:28.294 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:28.294 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1775793 ']' 00:27:28.294 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:28.294 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:28.294 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:28.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:28.294 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:28.294 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:28.294 [2024-12-10 12:35:50.352870] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:27:28.294 [2024-12-10 12:35:50.352920] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1775793 ] 00:27:28.294 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:28.294 Zero copy mechanism will not be used. 
00:27:28.294 [2024-12-10 12:35:50.431905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.553 [2024-12-10 12:35:50.469912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.553 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:28.553 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:28.553 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:28.553 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:28.553 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:28.812 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:28.812 12:35:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:29.071 nvme0n1 00:27:29.071 12:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:29.071 12:35:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:29.071 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:29.071 Zero copy mechanism will not be used. 00:27:29.071 Running I/O for 2 seconds... 
00:27:31.387 5745.00 IOPS, 718.12 MiB/s [2024-12-10T11:35:53.555Z] 5874.50 IOPS, 734.31 MiB/s 00:27:31.387 Latency(us) 00:27:31.387 [2024-12-10T11:35:53.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:31.387 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:31.387 nvme0n1 : 2.00 5875.75 734.47 0.00 0.00 2720.45 662.48 6126.19 00:27:31.387 [2024-12-10T11:35:53.555Z] =================================================================================================================== 00:27:31.387 [2024-12-10T11:35:53.555Z] Total : 5875.75 734.47 0.00 0.00 2720.45 662.48 6126.19 00:27:31.387 { 00:27:31.387 "results": [ 00:27:31.387 { 00:27:31.387 "job": "nvme0n1", 00:27:31.387 "core_mask": "0x2", 00:27:31.387 "workload": "randread", 00:27:31.387 "status": "finished", 00:27:31.387 "queue_depth": 16, 00:27:31.387 "io_size": 131072, 00:27:31.387 "runtime": 2.002298, 00:27:31.387 "iops": 5875.748764669395, 00:27:31.387 "mibps": 734.4685955836744, 00:27:31.387 "io_failed": 0, 00:27:31.387 "io_timeout": 0, 00:27:31.387 "avg_latency_us": 2720.4489306897763, 00:27:31.387 "min_latency_us": 662.4834782608696, 00:27:31.387 "max_latency_us": 6126.191304347826 00:27:31.387 } 00:27:31.387 ], 00:27:31.387 "core_count": 1 00:27:31.387 } 00:27:31.387 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:31.387 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:31.387 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:31.387 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:31.387 | select(.opcode=="crc32c") 00:27:31.387 | "\(.module_name) \(.executed)"' 00:27:31.387 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:31.387 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:31.387 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:31.387 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:31.387 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:31.387 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1775793 00:27:31.387 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1775793 ']' 00:27:31.387 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1775793 00:27:31.387 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:31.387 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:31.387 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1775793 00:27:31.387 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:31.387 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:31.387 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1775793' 00:27:31.387 killing process with pid 1775793 00:27:31.387 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1775793 00:27:31.387 Received shutdown signal, test time was about 2.000000 seconds 
00:27:31.387 00:27:31.387 Latency(us) 00:27:31.387 [2024-12-10T11:35:53.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:31.387 [2024-12-10T11:35:53.555Z] =================================================================================================================== 00:27:31.387 [2024-12-10T11:35:53.555Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:31.387 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1775793 00:27:31.647 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:31.647 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:31.647 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:31.647 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:31.647 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:31.647 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:31.647 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:31.647 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1776475 00:27:31.647 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1776475 /var/tmp/bperf.sock 00:27:31.647 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:31.647 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1776475 ']' 00:27:31.647 12:35:53 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:31.647 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:31.647 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:31.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:31.647 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:31.647 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:31.647 [2024-12-10 12:35:53.698941] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:27:31.647 [2024-12-10 12:35:53.698989] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1776475 ] 00:27:31.647 [2024-12-10 12:35:53.775640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.906 [2024-12-10 12:35:53.817092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.906 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:31.906 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:31.906 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:31.906 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:31.906 12:35:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:32.165 12:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:32.165 12:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:32.424 nvme0n1 00:27:32.424 12:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:32.424 12:35:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:32.683 Running I/O for 2 seconds... 
00:27:34.557 27745.00 IOPS, 108.38 MiB/s [2024-12-10T11:35:56.725Z] 27802.00 IOPS, 108.60 MiB/s 00:27:34.557 Latency(us) 00:27:34.557 [2024-12-10T11:35:56.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.557 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:34.557 nvme0n1 : 2.00 27809.14 108.63 0.00 0.00 4597.28 1923.34 7123.48 00:27:34.557 [2024-12-10T11:35:56.725Z] =================================================================================================================== 00:27:34.557 [2024-12-10T11:35:56.725Z] Total : 27809.14 108.63 0.00 0.00 4597.28 1923.34 7123.48 00:27:34.557 { 00:27:34.557 "results": [ 00:27:34.557 { 00:27:34.557 "job": "nvme0n1", 00:27:34.557 "core_mask": "0x2", 00:27:34.557 "workload": "randwrite", 00:27:34.557 "status": "finished", 00:27:34.557 "queue_depth": 128, 00:27:34.557 "io_size": 4096, 00:27:34.557 "runtime": 2.004089, 00:27:34.557 "iops": 27809.144204673546, 00:27:34.557 "mibps": 108.62946954950604, 00:27:34.557 "io_failed": 0, 00:27:34.557 "io_timeout": 0, 00:27:34.557 "avg_latency_us": 4597.283531434598, 00:27:34.557 "min_latency_us": 1923.3391304347826, 00:27:34.557 "max_latency_us": 7123.478260869565 00:27:34.557 } 00:27:34.557 ], 00:27:34.557 "core_count": 1 00:27:34.557 } 00:27:34.557 12:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:34.557 12:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:34.557 12:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:34.557 12:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:34.557 | select(.opcode=="crc32c") 00:27:34.557 | "\(.module_name) \(.executed)"' 00:27:34.557 12:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:34.817 12:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:34.817 12:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:34.817 12:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:34.817 12:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:34.817 12:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1776475 00:27:34.817 12:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1776475 ']' 00:27:34.817 12:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1776475 00:27:34.817 12:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:34.817 12:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:34.817 12:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1776475 00:27:34.817 12:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:34.817 12:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:34.817 12:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1776475' 00:27:34.817 killing process with pid 1776475 00:27:34.817 12:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1776475 00:27:34.817 Received shutdown signal, test time was about 2.000000 seconds 
00:27:34.817 00:27:34.817 Latency(us) 00:27:34.817 [2024-12-10T11:35:56.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.817 [2024-12-10T11:35:56.985Z] =================================================================================================================== 00:27:34.817 [2024-12-10T11:35:56.985Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:34.817 12:35:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1776475 00:27:35.076 12:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:35.076 12:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:35.076 12:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:35.076 12:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:35.076 12:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:35.076 12:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:35.076 12:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:35.076 12:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1776958 00:27:35.076 12:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1776958 /var/tmp/bperf.sock 00:27:35.076 12:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:35.076 12:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1776958 ']' 00:27:35.076 12:35:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:35.076 12:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:35.076 12:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:35.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:35.076 12:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:35.076 12:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:35.076 [2024-12-10 12:35:57.131761] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:27:35.077 [2024-12-10 12:35:57.131813] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1776958 ] 00:27:35.077 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:35.077 Zero copy mechanism will not be used. 
00:27:35.077 [2024-12-10 12:35:57.207643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.336 [2024-12-10 12:35:57.248036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.336 12:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:35.336 12:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:35.336 12:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:35.336 12:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:35.336 12:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:35.595 12:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:35.595 12:35:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:35.854 nvme0n1 00:27:36.114 12:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:36.114 12:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:36.114 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:36.114 Zero copy mechanism will not be used. 00:27:36.114 Running I/O for 2 seconds... 
00:27:37.992 6229.00 IOPS, 778.62 MiB/s [2024-12-10T11:36:00.160Z] 6302.50 IOPS, 787.81 MiB/s 00:27:37.992 Latency(us) 00:27:37.992 [2024-12-10T11:36:00.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:37.992 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:37.992 nvme0n1 : 2.00 6300.81 787.60 0.00 0.00 2535.13 1894.85 5755.77 00:27:37.992 [2024-12-10T11:36:00.160Z] =================================================================================================================== 00:27:37.992 [2024-12-10T11:36:00.160Z] Total : 6300.81 787.60 0.00 0.00 2535.13 1894.85 5755.77 00:27:37.992 { 00:27:37.992 "results": [ 00:27:37.992 { 00:27:37.992 "job": "nvme0n1", 00:27:37.992 "core_mask": "0x2", 00:27:37.992 "workload": "randwrite", 00:27:37.992 "status": "finished", 00:27:37.992 "queue_depth": 16, 00:27:37.992 "io_size": 131072, 00:27:37.992 "runtime": 2.002918, 00:27:37.992 "iops": 6300.807122408406, 00:27:37.992 "mibps": 787.6008903010508, 00:27:37.992 "io_failed": 0, 00:27:37.992 "io_timeout": 0, 00:27:37.992 "avg_latency_us": 2535.1256831805968, 00:27:37.992 "min_latency_us": 1894.8452173913045, 00:27:37.992 "max_latency_us": 5755.770434782608 00:27:37.992 } 00:27:37.992 ], 00:27:37.992 "core_count": 1 00:27:37.992 } 00:27:38.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:38.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:38.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:38.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:38.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:27:38.251 | select(.opcode=="crc32c") 00:27:38.251 | "\(.module_name) \(.executed)"' 00:27:38.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:38.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:38.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:38.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:38.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1776958 00:27:38.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1776958 ']' 00:27:38.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1776958 00:27:38.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:38.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:38.251 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1776958 00:27:38.511 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:38.511 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:38.511 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1776958' 00:27:38.511 killing process with pid 1776958 00:27:38.511 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1776958 00:27:38.511 Received shutdown signal, test time was about 2.000000 seconds 00:27:38.511 
00:27:38.511 Latency(us) 00:27:38.511 [2024-12-10T11:36:00.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.511 [2024-12-10T11:36:00.679Z] =================================================================================================================== 00:27:38.511 [2024-12-10T11:36:00.679Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:38.511 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1776958 00:27:38.511 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1775294 00:27:38.511 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1775294 ']' 00:27:38.511 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1775294 00:27:38.511 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:38.511 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:38.511 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1775294 00:27:38.511 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:38.511 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:38.511 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1775294' 00:27:38.511 killing process with pid 1775294 00:27:38.511 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1775294 00:27:38.511 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1775294 00:27:38.770 00:27:38.770 real 
0m14.333s 00:27:38.770 user 0m27.551s 00:27:38.770 sys 0m4.602s 00:27:38.770 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:38.770 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:38.770 ************************************ 00:27:38.770 END TEST nvmf_digest_clean 00:27:38.770 ************************************ 00:27:38.770 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:38.770 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:38.770 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:38.770 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:38.770 ************************************ 00:27:38.770 START TEST nvmf_digest_error 00:27:38.770 ************************************ 00:27:38.770 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:27:38.770 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:38.770 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:38.770 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:38.771 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:38.771 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1777730 00:27:38.771 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1777730 00:27:38.771 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:38.771 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1777730 ']' 00:27:38.771 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.771 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:38.771 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.771 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:38.771 12:36:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:39.030 [2024-12-10 12:36:00.940253] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:27:39.030 [2024-12-10 12:36:00.940295] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:39.030 [2024-12-10 12:36:01.020826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.030 [2024-12-10 12:36:01.060356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:39.030 [2024-12-10 12:36:01.060391] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:39.030 [2024-12-10 12:36:01.060398] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:39.030 [2024-12-10 12:36:01.060404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:39.030 [2024-12-10 12:36:01.060409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:39.030 [2024-12-10 12:36:01.060934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.030 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:39.030 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:39.030 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:39.030 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:39.030 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:39.031 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.031 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:39.031 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.031 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:39.031 [2024-12-10 12:36:01.125380] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:39.031 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.031 12:36:01 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:39.031 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:39.031 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.031 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:39.289 null0 00:27:39.290 [2024-12-10 12:36:01.221818] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.290 [2024-12-10 12:36:01.246005] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:39.290 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.290 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:39.290 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:39.290 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:39.290 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:39.290 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:39.290 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1777782 00:27:39.290 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1777782 /var/tmp/bperf.sock 00:27:39.290 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:39.290 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1777782 
']' 00:27:39.290 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:39.290 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:39.290 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:39.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:39.290 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:39.290 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:39.290 [2024-12-10 12:36:01.299787] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:27:39.290 [2024-12-10 12:36:01.299832] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1777782 ] 00:27:39.290 [2024-12-10 12:36:01.375957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.290 [2024-12-10 12:36:01.417083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.549 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:39.549 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:39.549 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:39.549 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:39.808 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:39.808 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.808 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:39.808 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.808 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:39.808 12:36:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:40.066 nvme0n1 00:27:40.066 12:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:40.066 12:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.066 12:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:40.066 12:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.066 12:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:40.066 12:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:40.066 Running I/O for 2 seconds... 00:27:40.066 [2024-12-10 12:36:02.229087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.066 [2024-12-10 12:36:02.229119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.066 [2024-12-10 12:36:02.229134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.242267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.242293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.242303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.254707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.254729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.254737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.266775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.266796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23786 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.266804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.275594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.275615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.275624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.284972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.284993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.285002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.297284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.297305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.297313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.309591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.309613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.309620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.322390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.322413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.322422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.331816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.331840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.331848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.339665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.339686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.339694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.350472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.350492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.350501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.361586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.361608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.361616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.371304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.371325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.371334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.381019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.381041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.381049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.389682] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.389703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.389711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.400261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.400282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.400290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.410138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.410166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.410180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.419677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.419699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.419708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.428503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.428524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.428532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.440689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.440711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.440719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.451357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.451378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.451387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.460000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.460022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.460031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.471805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.471827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.471836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.326 [2024-12-10 12:36:02.484416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.326 [2024-12-10 12:36:02.484438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.326 [2024-12-10 12:36:02.484446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.587 [2024-12-10 12:36:02.493044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.587 [2024-12-10 12:36:02.493066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.587 [2024-12-10 12:36:02.493075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.587 [2024-12-10 12:36:02.505191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.587 [2024-12-10 12:36:02.505217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.587 [2024-12-10 12:36:02.505226] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.587 [2024-12-10 12:36:02.517978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.587 [2024-12-10 12:36:02.518000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.587 [2024-12-10 12:36:02.518008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.587 [2024-12-10 12:36:02.531094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.587 [2024-12-10 12:36:02.531115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.587 [2024-12-10 12:36:02.531124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.587 [2024-12-10 12:36:02.542415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.587 [2024-12-10 12:36:02.542436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.587 [2024-12-10 12:36:02.542445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.587 [2024-12-10 12:36:02.550950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.587 [2024-12-10 12:36:02.550971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21521 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:40.587 [2024-12-10 12:36:02.550980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.587 [2024-12-10 12:36:02.562200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.587 [2024-12-10 12:36:02.562223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.587 [2024-12-10 12:36:02.562231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.587 [2024-12-10 12:36:02.571687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.587 [2024-12-10 12:36:02.571708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.587 [2024-12-10 12:36:02.571716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.587 [2024-12-10 12:36:02.582310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.587 [2024-12-10 12:36:02.582331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.587 [2024-12-10 12:36:02.582340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.587 [2024-12-10 12:36:02.589951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.587 [2024-12-10 12:36:02.589971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:62 nsid:1 lba:12173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.587 [2024-12-10 12:36:02.589980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.587 [2024-12-10 12:36:02.600479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.587 [2024-12-10 12:36:02.600499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.587 [2024-12-10 12:36:02.600508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.587 [2024-12-10 12:36:02.611919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.587 [2024-12-10 12:36:02.611940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.587 [2024-12-10 12:36:02.611948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.587 [2024-12-10 12:36:02.622110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.587 [2024-12-10 12:36:02.622131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.587 [2024-12-10 12:36:02.622139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.587 [2024-12-10 12:36:02.633018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.587 [2024-12-10 
12:36:02.633039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.587 [2024-12-10 12:36:02.633047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.587 [2024-12-10 12:36:02.642460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.587 [2024-12-10 12:36:02.642480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.587 [2024-12-10 12:36:02.642489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.587 [2024-12-10 12:36:02.652213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.587 [2024-12-10 12:36:02.652234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.587 [2024-12-10 12:36:02.652243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.587 [2024-12-10 12:36:02.660690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.587 [2024-12-10 12:36:02.660711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.587 [2024-12-10 12:36:02.660719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.587 [2024-12-10 12:36:02.673395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1ca2440) 00:27:40.587 [2024-12-10 12:36:02.673416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.587 [2024-12-10 12:36:02.673424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.587 [2024-12-10 12:36:02.685169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.587 [2024-12-10 12:36:02.685190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.587 [2024-12-10 12:36:02.685202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.587 [2024-12-10 12:36:02.693504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.587 [2024-12-10 12:36:02.693525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.587 [2024-12-10 12:36:02.693533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.587 [2024-12-10 12:36:02.704555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.588 [2024-12-10 12:36:02.704576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.588 [2024-12-10 12:36:02.704585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.588 [2024-12-10 12:36:02.714517] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.588 [2024-12-10 12:36:02.714538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.588 [2024-12-10 12:36:02.714547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.588 [2024-12-10 12:36:02.723434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.588 [2024-12-10 12:36:02.723455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.588 [2024-12-10 12:36:02.723463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.588 [2024-12-10 12:36:02.731894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.588 [2024-12-10 12:36:02.731915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.588 [2024-12-10 12:36:02.731923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.588 [2024-12-10 12:36:02.742663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.588 [2024-12-10 12:36:02.742684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.588 [2024-12-10 12:36:02.742692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:40.588 [2024-12-10 12:36:02.751465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.588 [2024-12-10 12:36:02.751488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.588 [2024-12-10 12:36:02.751497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.847 [2024-12-10 12:36:02.764135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.847 [2024-12-10 12:36:02.764162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.847 [2024-12-10 12:36:02.764171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.847 [2024-12-10 12:36:02.773387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.847 [2024-12-10 12:36:02.773412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.847 [2024-12-10 12:36:02.773420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.847 [2024-12-10 12:36:02.782492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.847 [2024-12-10 12:36:02.782512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.847 [2024-12-10 12:36:02.782521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.847 [2024-12-10 12:36:02.791860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.847 [2024-12-10 12:36:02.791880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.847 [2024-12-10 12:36:02.791888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.848 [2024-12-10 12:36:02.802110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.848 [2024-12-10 12:36:02.802131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.848 [2024-12-10 12:36:02.802139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.848 [2024-12-10 12:36:02.810796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.848 [2024-12-10 12:36:02.810817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.848 [2024-12-10 12:36:02.810826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.848 [2024-12-10 12:36:02.819567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.848 [2024-12-10 12:36:02.819587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.848 [2024-12-10 12:36:02.819596] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.848 [2024-12-10 12:36:02.830310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.848 [2024-12-10 12:36:02.830331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.848 [2024-12-10 12:36:02.830339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.848 [2024-12-10 12:36:02.839649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.848 [2024-12-10 12:36:02.839671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.848 [2024-12-10 12:36:02.839679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.848 [2024-12-10 12:36:02.850196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.848 [2024-12-10 12:36:02.850217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.848 [2024-12-10 12:36:02.850226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.848 [2024-12-10 12:36:02.858986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.848 [2024-12-10 12:36:02.859006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:781 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:40.848 [2024-12-10 12:36:02.859015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.848 [2024-12-10 12:36:02.869546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.848 [2024-12-10 12:36:02.869566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.848 [2024-12-10 12:36:02.869574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.848 [2024-12-10 12:36:02.879204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.848 [2024-12-10 12:36:02.879225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.848 [2024-12-10 12:36:02.879234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.848 [2024-12-10 12:36:02.887993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.848 [2024-12-10 12:36:02.888014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.848 [2024-12-10 12:36:02.888022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.848 [2024-12-10 12:36:02.897750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.848 [2024-12-10 12:36:02.897771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:96 nsid:1 lba:24469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.848 [2024-12-10 12:36:02.897779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.848 [2024-12-10 12:36:02.908095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.848 [2024-12-10 12:36:02.908116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.848 [2024-12-10 12:36:02.908124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.848 [2024-12-10 12:36:02.918194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.848 [2024-12-10 12:36:02.918215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.848 [2024-12-10 12:36:02.918224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.848 [2024-12-10 12:36:02.929261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.848 [2024-12-10 12:36:02.929282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.848 [2024-12-10 12:36:02.929290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.848 [2024-12-10 12:36:02.939222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.848 [2024-12-10 
12:36:02.939244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.848 [2024-12-10 12:36:02.939255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.848 [2024-12-10 12:36:02.948388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.848 [2024-12-10 12:36:02.948409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.848 [2024-12-10 12:36:02.948417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.848 [2024-12-10 12:36:02.960534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.848 [2024-12-10 12:36:02.960555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.848 [2024-12-10 12:36:02.960563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.848 [2024-12-10 12:36:02.970125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.848 [2024-12-10 12:36:02.970147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.848 [2024-12-10 12:36:02.970155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.848 [2024-12-10 12:36:02.979036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1ca2440) 00:27:40.848 [2024-12-10 12:36:02.979058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.848 [2024-12-10 12:36:02.979067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.848 [2024-12-10 12:36:02.988441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.848 [2024-12-10 12:36:02.988462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.848 [2024-12-10 12:36:02.988470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.848 [2024-12-10 12:36:02.999525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.848 [2024-12-10 12:36:02.999546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.848 [2024-12-10 12:36:02.999554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:40.848 [2024-12-10 12:36:03.012364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:40.848 [2024-12-10 12:36:03.012387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:40.848 [2024-12-10 12:36:03.012398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.109 [2024-12-10 12:36:03.022910] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.109 [2024-12-10 12:36:03.022932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.109 [2024-12-10 12:36:03.022940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.109 [2024-12-10 12:36:03.031550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.109 [2024-12-10 12:36:03.031571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.109 [2024-12-10 12:36:03.031579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.109 [2024-12-10 12:36:03.041543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.109 [2024-12-10 12:36:03.041566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.109 [2024-12-10 12:36:03.041574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.109 [2024-12-10 12:36:03.050979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.109 [2024-12-10 12:36:03.051001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.109 [2024-12-10 12:36:03.051011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:41.109 [2024-12-10 12:36:03.061176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.109 [2024-12-10 12:36:03.061199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.109 [2024-12-10 12:36:03.061207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.109 [2024-12-10 12:36:03.071156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.109 [2024-12-10 12:36:03.071183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.109 [2024-12-10 12:36:03.071192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.109 [2024-12-10 12:36:03.079461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.109 [2024-12-10 12:36:03.079482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.109 [2024-12-10 12:36:03.079490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.109 [2024-12-10 12:36:03.088862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.109 [2024-12-10 12:36:03.088883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.109 [2024-12-10 12:36:03.088891] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.109 [2024-12-10 12:36:03.100328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.109 [2024-12-10 12:36:03.100349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.109 [2024-12-10 12:36:03.100357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.109 [2024-12-10 12:36:03.109265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.109 [2024-12-10 12:36:03.109287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.109 [2024-12-10 12:36:03.109298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.109 [2024-12-10 12:36:03.119422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.109 [2024-12-10 12:36:03.119443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.109 [2024-12-10 12:36:03.119451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.109 [2024-12-10 12:36:03.128863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.109 [2024-12-10 12:36:03.128884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.109 [2024-12-10 
12:36:03.128893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.109 [2024-12-10 12:36:03.138751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.109 [2024-12-10 12:36:03.138772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.109 [2024-12-10 12:36:03.138780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.109 [2024-12-10 12:36:03.149449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.109 [2024-12-10 12:36:03.149470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.109 [2024-12-10 12:36:03.149478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.109 [2024-12-10 12:36:03.159411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.109 [2024-12-10 12:36:03.159431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.109 [2024-12-10 12:36:03.159440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.109 [2024-12-10 12:36:03.168500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.109 [2024-12-10 12:36:03.168522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9453 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.109 [2024-12-10 12:36:03.168531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.109 [2024-12-10 12:36:03.180927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.109 [2024-12-10 12:36:03.180961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.109 [2024-12-10 12:36:03.180971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.109 [2024-12-10 12:36:03.189685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.109 [2024-12-10 12:36:03.189707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.109 [2024-12-10 12:36:03.189715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.109 [2024-12-10 12:36:03.200173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.109 [2024-12-10 12:36:03.200199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.109 [2024-12-10 12:36:03.200207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.109 [2024-12-10 12:36:03.210017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.109 [2024-12-10 12:36:03.210039] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.109 [2024-12-10 12:36:03.210049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.109 24962.00 IOPS, 97.51 MiB/s [2024-12-10T11:36:03.277Z] [2024-12-10 12:36:03.219958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.109 [2024-12-10 12:36:03.219978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.109 [2024-12-10 12:36:03.219987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.109 [2024-12-10 12:36:03.229593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.110 [2024-12-10 12:36:03.229614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.110 [2024-12-10 12:36:03.229623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.110 [2024-12-10 12:36:03.238511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.110 [2024-12-10 12:36:03.238532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.110 [2024-12-10 12:36:03.238540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.110 [2024-12-10 12:36:03.247854] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.110 [2024-12-10 12:36:03.247876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.110 [2024-12-10 12:36:03.247884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.110 [2024-12-10 12:36:03.258515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.110 [2024-12-10 12:36:03.258536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.110 [2024-12-10 12:36:03.258545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.110 [2024-12-10 12:36:03.267931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.110 [2024-12-10 12:36:03.267953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.110 [2024-12-10 12:36:03.267962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.370 [2024-12-10 12:36:03.277777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.370 [2024-12-10 12:36:03.277800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.370 [2024-12-10 12:36:03.277809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:41.370 [2024-12-10 12:36:03.286902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.370 [2024-12-10 12:36:03.286924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.370 [2024-12-10 12:36:03.286932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.370 [2024-12-10 12:36:03.297407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.370 [2024-12-10 12:36:03.297429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.370 [2024-12-10 12:36:03.297438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.370 [2024-12-10 12:36:03.307454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.370 [2024-12-10 12:36:03.307476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.370 [2024-12-10 12:36:03.307484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.370 [2024-12-10 12:36:03.317693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.370 [2024-12-10 12:36:03.317714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.370 [2024-12-10 12:36:03.317722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.370 [2024-12-10 12:36:03.326111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.370 [2024-12-10 12:36:03.326133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.370 [2024-12-10 12:36:03.326142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.370 [2024-12-10 12:36:03.336373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.370 [2024-12-10 12:36:03.336394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.370 [2024-12-10 12:36:03.336403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.370 [2024-12-10 12:36:03.345782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.370 [2024-12-10 12:36:03.345803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.370 [2024-12-10 12:36:03.345811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.370 [2024-12-10 12:36:03.356522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.370 [2024-12-10 12:36:03.356545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.370 [2024-12-10 
12:36:03.356553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.370 [2024-12-10 12:36:03.365427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.370 [2024-12-10 12:36:03.365453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.370 [2024-12-10 12:36:03.365461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.370 [2024-12-10 12:36:03.375348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.370 [2024-12-10 12:36:03.375370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.370 [2024-12-10 12:36:03.375378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.370 [2024-12-10 12:36:03.386252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.370 [2024-12-10 12:36:03.386274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.370 [2024-12-10 12:36:03.386282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.370 [2024-12-10 12:36:03.397300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.370 [2024-12-10 12:36:03.397323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13437 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.370 [2024-12-10 12:36:03.397331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.370 [2024-12-10 12:36:03.406225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.370 [2024-12-10 12:36:03.406247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.370 [2024-12-10 12:36:03.406256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.370 [2024-12-10 12:36:03.417562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.370 [2024-12-10 12:36:03.417583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.370 [2024-12-10 12:36:03.417592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.370 [2024-12-10 12:36:03.426474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.370 [2024-12-10 12:36:03.426495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.370 [2024-12-10 12:36:03.426504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.370 [2024-12-10 12:36:03.439024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.370 [2024-12-10 12:36:03.439046] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.370 [2024-12-10 12:36:03.439054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.370 [2024-12-10 12:36:03.448345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.370 [2024-12-10 12:36:03.448366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.370 [2024-12-10 12:36:03.448375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.370 [2024-12-10 12:36:03.458430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.370 [2024-12-10 12:36:03.458453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.370 [2024-12-10 12:36:03.458462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.370 [2024-12-10 12:36:03.467351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.370 [2024-12-10 12:36:03.467373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.370 [2024-12-10 12:36:03.467382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.370 [2024-12-10 12:36:03.479449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ca2440) 00:27:41.370 [2024-12-10 12:36:03.479471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.370 [2024-12-10 12:36:03.479479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.370 [2024-12-10 12:36:03.490016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.370 [2024-12-10 12:36:03.490037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.370 [2024-12-10 12:36:03.490045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.370 [2024-12-10 12:36:03.498979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.370 [2024-12-10 12:36:03.499000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.370 [2024-12-10 12:36:03.499008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.371 [2024-12-10 12:36:03.508492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.371 [2024-12-10 12:36:03.508513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.371 [2024-12-10 12:36:03.508522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.371 [2024-12-10 12:36:03.516919] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.371 [2024-12-10 12:36:03.516940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.371 [2024-12-10 12:36:03.516948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.371 [2024-12-10 12:36:03.528029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.371 [2024-12-10 12:36:03.528051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.371 [2024-12-10 12:36:03.528060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.540609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.631 [2024-12-10 12:36:03.540632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.631 [2024-12-10 12:36:03.540644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.551627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.631 [2024-12-10 12:36:03.551650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.631 [2024-12-10 12:36:03.551658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.565363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.631 [2024-12-10 12:36:03.565384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.631 [2024-12-10 12:36:03.565393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.576771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.631 [2024-12-10 12:36:03.576791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.631 [2024-12-10 12:36:03.576800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.584763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.631 [2024-12-10 12:36:03.584784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.631 [2024-12-10 12:36:03.584793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.596213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.631 [2024-12-10 12:36:03.596235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.631 [2024-12-10 12:36:03.596243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.604587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.631 [2024-12-10 12:36:03.604608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.631 [2024-12-10 12:36:03.604617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.616624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.631 [2024-12-10 12:36:03.616645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.631 [2024-12-10 12:36:03.616653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.629163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.631 [2024-12-10 12:36:03.629185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.631 [2024-12-10 12:36:03.629194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.637479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.631 [2024-12-10 12:36:03.637504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.631 [2024-12-10 
12:36:03.637513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.647610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.631 [2024-12-10 12:36:03.647631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.631 [2024-12-10 12:36:03.647640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.659859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.631 [2024-12-10 12:36:03.659881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.631 [2024-12-10 12:36:03.659889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.671610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.631 [2024-12-10 12:36:03.671631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.631 [2024-12-10 12:36:03.671639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.682940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.631 [2024-12-10 12:36:03.682960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15100 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.631 [2024-12-10 12:36:03.682969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.692023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.631 [2024-12-10 12:36:03.692043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.631 [2024-12-10 12:36:03.692051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.700477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.631 [2024-12-10 12:36:03.700498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.631 [2024-12-10 12:36:03.700506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.710466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.631 [2024-12-10 12:36:03.710487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.631 [2024-12-10 12:36:03.710495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.721594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.631 [2024-12-10 12:36:03.721614] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.631 [2024-12-10 12:36:03.721622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.730040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.631 [2024-12-10 12:36:03.730061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.631 [2024-12-10 12:36:03.730069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.741884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.631 [2024-12-10 12:36:03.741906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.631 [2024-12-10 12:36:03.741914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.754768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.631 [2024-12-10 12:36:03.754788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.631 [2024-12-10 12:36:03.754797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.767695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ca2440) 00:27:41.631 [2024-12-10 12:36:03.767717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.631 [2024-12-10 12:36:03.767725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.776508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.631 [2024-12-10 12:36:03.776529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.631 [2024-12-10 12:36:03.776537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.631 [2024-12-10 12:36:03.788739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.632 [2024-12-10 12:36:03.788759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.632 [2024-12-10 12:36:03.788768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.892 [2024-12-10 12:36:03.805091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.892 [2024-12-10 12:36:03.805113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.892 [2024-12-10 12:36:03.805121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.892 [2024-12-10 12:36:03.815198] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.892 [2024-12-10 12:36:03.815219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.892 [2024-12-10 12:36:03.815228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.892 [2024-12-10 12:36:03.823780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.892 [2024-12-10 12:36:03.823803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.892 [2024-12-10 12:36:03.823812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.892 [2024-12-10 12:36:03.836492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.892 [2024-12-10 12:36:03.836514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.892 [2024-12-10 12:36:03.836522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.892 [2024-12-10 12:36:03.848076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.892 [2024-12-10 12:36:03.848097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.892 [2024-12-10 12:36:03.848105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:41.892 [2024-12-10 12:36:03.857079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.892 [2024-12-10 12:36:03.857100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.892 [2024-12-10 12:36:03.857108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.892 [2024-12-10 12:36:03.869147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.892 [2024-12-10 12:36:03.869173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.892 [2024-12-10 12:36:03.869182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.892 [2024-12-10 12:36:03.881010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.892 [2024-12-10 12:36:03.881030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.892 [2024-12-10 12:36:03.881039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.892 [2024-12-10 12:36:03.893588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.892 [2024-12-10 12:36:03.893609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.892 [2024-12-10 12:36:03.893617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.892 [2024-12-10 12:36:03.902735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.892 [2024-12-10 12:36:03.902756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.892 [2024-12-10 12:36:03.902764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.892 [2024-12-10 12:36:03.913818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.892 [2024-12-10 12:36:03.913838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.892 [2024-12-10 12:36:03.913847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.892 [2024-12-10 12:36:03.923027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.892 [2024-12-10 12:36:03.923048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.892 [2024-12-10 12:36:03.923056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.892 [2024-12-10 12:36:03.933244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.892 [2024-12-10 12:36:03.933264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.892 [2024-12-10 
12:36:03.933272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.892 [2024-12-10 12:36:03.943743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.892 [2024-12-10 12:36:03.943764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.892 [2024-12-10 12:36:03.943773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.892 [2024-12-10 12:36:03.953327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.892 [2024-12-10 12:36:03.953348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.892 [2024-12-10 12:36:03.953357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.892 [2024-12-10 12:36:03.961945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.892 [2024-12-10 12:36:03.961965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.892 [2024-12-10 12:36:03.961974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.892 [2024-12-10 12:36:03.972354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.892 [2024-12-10 12:36:03.972375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25286 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.892 [2024-12-10 12:36:03.972383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.892 [2024-12-10 12:36:03.981323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.892 [2024-12-10 12:36:03.981344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.892 [2024-12-10 12:36:03.981352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.892 [2024-12-10 12:36:03.991339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.892 [2024-12-10 12:36:03.991359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.892 [2024-12-10 12:36:03.991368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.892 [2024-12-10 12:36:03.999503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.892 [2024-12-10 12:36:03.999523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.892 [2024-12-10 12:36:03.999535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.892 [2024-12-10 12:36:04.009397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.892 [2024-12-10 12:36:04.009417] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.892 [2024-12-10 12:36:04.009426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.892 [2024-12-10 12:36:04.019290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.892 [2024-12-10 12:36:04.019311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.893 [2024-12-10 12:36:04.019320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.893 [2024-12-10 12:36:04.029656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.893 [2024-12-10 12:36:04.029676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.893 [2024-12-10 12:36:04.029685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.893 [2024-12-10 12:36:04.038360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.893 [2024-12-10 12:36:04.038381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.893 [2024-12-10 12:36:04.038389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.893 [2024-12-10 12:36:04.048451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ca2440) 00:27:41.893 [2024-12-10 12:36:04.048471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.893 [2024-12-10 12:36:04.048480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:41.893 [2024-12-10 12:36:04.056821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:41.893 [2024-12-10 12:36:04.056842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.893 [2024-12-10 12:36:04.056851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.152 [2024-12-10 12:36:04.066797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:42.152 [2024-12-10 12:36:04.066820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.152 [2024-12-10 12:36:04.066829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.153 [2024-12-10 12:36:04.077383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:42.153 [2024-12-10 12:36:04.077405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.153 [2024-12-10 12:36:04.077414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.153 [2024-12-10 12:36:04.085310] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:42.153 [2024-12-10 12:36:04.085337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.153 [2024-12-10 12:36:04.085346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.153 [2024-12-10 12:36:04.095391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:42.153 [2024-12-10 12:36:04.095412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.153 [2024-12-10 12:36:04.095421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.153 [2024-12-10 12:36:04.106148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:42.153 [2024-12-10 12:36:04.106174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.153 [2024-12-10 12:36:04.106184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.153 [2024-12-10 12:36:04.114666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:42.153 [2024-12-10 12:36:04.114687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.153 [2024-12-10 12:36:04.114696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:42.153 [2024-12-10 12:36:04.126224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:42.153 [2024-12-10 12:36:04.126244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.153 [2024-12-10 12:36:04.126252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.153 [2024-12-10 12:36:04.137464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:42.153 [2024-12-10 12:36:04.137484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.153 [2024-12-10 12:36:04.137493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.153 [2024-12-10 12:36:04.149316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:42.153 [2024-12-10 12:36:04.149336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.153 [2024-12-10 12:36:04.149345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.153 [2024-12-10 12:36:04.161701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:42.153 [2024-12-10 12:36:04.161722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.153 [2024-12-10 12:36:04.161730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.153 [2024-12-10 12:36:04.170192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:42.153 [2024-12-10 12:36:04.170213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.153 [2024-12-10 12:36:04.170221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.153 [2024-12-10 12:36:04.181685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:42.153 [2024-12-10 12:36:04.181705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.153 [2024-12-10 12:36:04.181713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.153 [2024-12-10 12:36:04.190600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:42.153 [2024-12-10 12:36:04.190622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.153 [2024-12-10 12:36:04.190630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.153 [2024-12-10 12:36:04.203584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:42.153 [2024-12-10 12:36:04.203605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.153 [2024-12-10 
12:36:04.203613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.153 [2024-12-10 12:36:04.216362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca2440) 00:27:42.153 [2024-12-10 12:36:04.216383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:42.153 [2024-12-10 12:36:04.216392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:42.153 24796.00 IOPS, 96.86 MiB/s 00:27:42.153 Latency(us) 00:27:42.153 [2024-12-10T11:36:04.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:42.153 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:42.153 nvme0n1 : 2.04 24334.97 95.06 0.00 0.00 5155.02 2478.97 45590.26 00:27:42.153 [2024-12-10T11:36:04.321Z] =================================================================================================================== 00:27:42.153 [2024-12-10T11:36:04.321Z] Total : 24334.97 95.06 0.00 0.00 5155.02 2478.97 45590.26 00:27:42.153 { 00:27:42.153 "results": [ 00:27:42.153 { 00:27:42.153 "job": "nvme0n1", 00:27:42.153 "core_mask": "0x2", 00:27:42.153 "workload": "randread", 00:27:42.153 "status": "finished", 00:27:42.153 "queue_depth": 128, 00:27:42.153 "io_size": 4096, 00:27:42.153 "runtime": 2.04315, 00:27:42.153 "iops": 24334.972958422044, 00:27:42.153 "mibps": 95.05848811883611, 00:27:42.153 "io_failed": 0, 00:27:42.153 "io_timeout": 0, 00:27:42.153 "avg_latency_us": 5155.020807163594, 00:27:42.153 "min_latency_us": 2478.9704347826087, 00:27:42.153 "max_latency_us": 45590.260869565216 00:27:42.153 } 00:27:42.153 ], 00:27:42.153 "core_count": 1 00:27:42.153 } 00:27:42.153 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:42.153 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:42.153 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:42.153 | .driver_specific 00:27:42.153 | .nvme_error 00:27:42.153 | .status_code 00:27:42.153 | .command_transient_transport_error' 00:27:42.153 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:42.413 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 194 > 0 )) 00:27:42.413 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1777782 00:27:42.413 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1777782 ']' 00:27:42.413 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1777782 00:27:42.413 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:42.413 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:42.413 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1777782 00:27:42.413 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:42.413 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:42.413 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1777782' 00:27:42.413 killing process with pid 1777782 00:27:42.413 
12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1777782 00:27:42.413 Received shutdown signal, test time was about 2.000000 seconds 00:27:42.413 00:27:42.413 Latency(us) 00:27:42.413 [2024-12-10T11:36:04.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:42.413 [2024-12-10T11:36:04.581Z] =================================================================================================================== 00:27:42.413 [2024-12-10T11:36:04.581Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:42.413 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1777782 00:27:42.673 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:42.673 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:42.673 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:42.673 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:42.673 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:42.673 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1778335 00:27:42.673 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1778335 /var/tmp/bperf.sock 00:27:42.673 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:42.673 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1778335 ']' 00:27:42.673 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:27:42.673 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:42.673 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:42.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:42.673 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:42.673 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:42.673 [2024-12-10 12:36:04.746037] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:27:42.673 [2024-12-10 12:36:04.746087] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1778335 ] 00:27:42.673 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:42.673 Zero copy mechanism will not be used. 
00:27:42.673 [2024-12-10 12:36:04.821672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.947 [2024-12-10 12:36:04.866364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.947 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:42.947 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:42.947 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:42.947 12:36:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:43.216 12:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:43.216 12:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.216 12:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:43.216 12:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.216 12:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:43.216 12:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:43.508 nvme0n1 00:27:43.508 12:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:43.508 12:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.508 12:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:43.508 12:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.508 12:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:43.508 12:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:43.508 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:43.508 Zero copy mechanism will not be used. 00:27:43.508 Running I/O for 2 seconds... 00:27:43.508 [2024-12-10 12:36:05.564301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:43.508 [2024-12-10 12:36:05.564337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.508 [2024-12-10 12:36:05.564348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.508 [2024-12-10 12:36:05.569636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:43.508 [2024-12-10 12:36:05.569663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.508 [2024-12-10 12:36:05.569672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.509 
[2024-12-10 12:36:05.574803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:43.509 [2024-12-10 12:36:05.574826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.509 [2024-12-10 12:36:05.574834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.509 [2024-12-10 12:36:05.580058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:43.509 [2024-12-10 12:36:05.580085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.509 [2024-12-10 12:36:05.580093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.509 [2024-12-10 12:36:05.585281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:43.509 [2024-12-10 12:36:05.585304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.509 [2024-12-10 12:36:05.585312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.509 [2024-12-10 12:36:05.590526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:43.509 [2024-12-10 12:36:05.590549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.509 [2024-12-10 12:36:05.590557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:43.509 [2024-12-10 12:36:05.595903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:43.509 [2024-12-10 12:36:05.595928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.509 [2024-12-10 12:36:05.595936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:43.509 [2024-12-10 12:36:05.601104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:43.509 [2024-12-10 12:36:05.601127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.509 [2024-12-10 12:36:05.601135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:43.509 [2024-12-10 12:36:05.606378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:43.509 [2024-12-10 12:36:05.606403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.509 [2024-12-10 12:36:05.606412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:43.509 [2024-12-10 12:36:05.611682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:43.509 [2024-12-10 12:36:05.611706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:43.509 [2024-12-10 12:36:05.611714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:43.509 [2024-12-10 12:36:05.616965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:43.509 [2024-12-10 12:36:05.616987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.509 [2024-12-10 12:36:05.616996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:43.509 [2024-12-10 12:36:05.622129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:43.509 [2024-12-10 12:36:05.622151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.509 [2024-12-10 12:36:05.622167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:43.509 [2024-12-10 12:36:05.627332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:43.509 [2024-12-10 12:36:05.627355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:43.509 [2024-12-10 12:36:05.627363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... dozens of further entries with the identical three-line pattern elided: a data digest error on tqpair=(0x12d40c0), a READ command print (sqid:1, cids 0-15, varying lba, len:32), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, spanning timestamps 12:36:05.632524 through 12:36:05.994749 ...]
00:27:44.089 [2024-12-10 12:36:05.999977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.089 [2024-12-10 12:36:06.000000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.089 [2024-12-10 12:36:06.000008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.089 [2024-12-10 12:36:06.005284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.089 [2024-12-10 12:36:06.005306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.089 [2024-12-10 12:36:06.005314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.089 [2024-12-10 12:36:06.010546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.089 [2024-12-10 12:36:06.010569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.089 [2024-12-10 12:36:06.010577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.089 [2024-12-10 12:36:06.016229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.089 [2024-12-10 12:36:06.016251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.089 [2024-12-10 12:36:06.016259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.089 [2024-12-10 12:36:06.023011] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.089 [2024-12-10 12:36:06.023035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.089 [2024-12-10 12:36:06.023044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.089 [2024-12-10 12:36:06.030441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.089 [2024-12-10 12:36:06.030466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.089 [2024-12-10 12:36:06.030475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.089 [2024-12-10 12:36:06.036800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.089 [2024-12-10 12:36:06.036824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.089 [2024-12-10 12:36:06.036837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.089 [2024-12-10 12:36:06.043174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.089 [2024-12-10 12:36:06.043198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.089 [2024-12-10 12:36:06.043207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:27:44.089 [2024-12-10 12:36:06.049075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.089 [2024-12-10 12:36:06.049099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.089 [2024-12-10 12:36:06.049109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.089 [2024-12-10 12:36:06.055629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.089 [2024-12-10 12:36:06.055652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.089 [2024-12-10 12:36:06.055661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.089 [2024-12-10 12:36:06.063006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.089 [2024-12-10 12:36:06.063032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.089 [2024-12-10 12:36:06.063043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.089 [2024-12-10 12:36:06.069922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.089 [2024-12-10 12:36:06.069947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.089 [2024-12-10 12:36:06.069956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.089 [2024-12-10 12:36:06.076990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.089 [2024-12-10 12:36:06.077016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.089 [2024-12-10 12:36:06.077026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.089 [2024-12-10 12:36:06.084846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.089 [2024-12-10 12:36:06.084871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.089 [2024-12-10 12:36:06.084880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.089 [2024-12-10 12:36:06.092194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.089 [2024-12-10 12:36:06.092219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.089 [2024-12-10 12:36:06.092228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.089 [2024-12-10 12:36:06.098368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.089 [2024-12-10 12:36:06.098393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.089 [2024-12-10 
12:36:06.098402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.089 [2024-12-10 12:36:06.103788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.089 [2024-12-10 12:36:06.103812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.089 [2024-12-10 12:36:06.103822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.089 [2024-12-10 12:36:06.109081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.089 [2024-12-10 12:36:06.109104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.089 [2024-12-10 12:36:06.109113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.089 [2024-12-10 12:36:06.114334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.089 [2024-12-10 12:36:06.114357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.089 [2024-12-10 12:36:06.114366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.089 [2024-12-10 12:36:06.119688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.089 [2024-12-10 12:36:06.119712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13184 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.089 [2024-12-10 12:36:06.119720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.089 [2024-12-10 12:36:06.124947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.089 [2024-12-10 12:36:06.124970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.089 [2024-12-10 12:36:06.124979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.089 [2024-12-10 12:36:06.130316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.089 [2024-12-10 12:36:06.130339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.089 [2024-12-10 12:36:06.130348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.089 [2024-12-10 12:36:06.135574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.089 [2024-12-10 12:36:06.135599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.090 [2024-12-10 12:36:06.135607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.090 [2024-12-10 12:36:06.140871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.090 [2024-12-10 12:36:06.140894] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.090 [2024-12-10 12:36:06.140907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.090 [2024-12-10 12:36:06.146211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.090 [2024-12-10 12:36:06.146234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.090 [2024-12-10 12:36:06.146243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.090 [2024-12-10 12:36:06.151541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.090 [2024-12-10 12:36:06.151565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.090 [2024-12-10 12:36:06.151574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.090 [2024-12-10 12:36:06.156893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.090 [2024-12-10 12:36:06.156916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.090 [2024-12-10 12:36:06.156926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.090 [2024-12-10 12:36:06.162296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12d40c0) 00:27:44.090 [2024-12-10 12:36:06.162320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.090 [2024-12-10 12:36:06.162329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.090 [2024-12-10 12:36:06.167681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.090 [2024-12-10 12:36:06.167705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.090 [2024-12-10 12:36:06.167714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.090 [2024-12-10 12:36:06.172971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.090 [2024-12-10 12:36:06.172993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.090 [2024-12-10 12:36:06.173002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.090 [2024-12-10 12:36:06.178282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.090 [2024-12-10 12:36:06.178304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.090 [2024-12-10 12:36:06.178312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.090 [2024-12-10 12:36:06.183586] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.090 [2024-12-10 12:36:06.183608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.090 [2024-12-10 12:36:06.183616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.090 [2024-12-10 12:36:06.188887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.090 [2024-12-10 12:36:06.188913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.090 [2024-12-10 12:36:06.188922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.090 [2024-12-10 12:36:06.194207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.090 [2024-12-10 12:36:06.194229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.090 [2024-12-10 12:36:06.194238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.090 [2024-12-10 12:36:06.199447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.090 [2024-12-10 12:36:06.199469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.090 [2024-12-10 12:36:06.199478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:27:44.090 [2024-12-10 12:36:06.204831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.090 [2024-12-10 12:36:06.204854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.090 [2024-12-10 12:36:06.204862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.090 [2024-12-10 12:36:06.210166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.090 [2024-12-10 12:36:06.210188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.090 [2024-12-10 12:36:06.210197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.090 [2024-12-10 12:36:06.215485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.090 [2024-12-10 12:36:06.215507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.090 [2024-12-10 12:36:06.215516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.090 [2024-12-10 12:36:06.220795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.090 [2024-12-10 12:36:06.220818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.090 [2024-12-10 12:36:06.220827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.090 [2024-12-10 12:36:06.226123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.090 [2024-12-10 12:36:06.226146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.090 [2024-12-10 12:36:06.226155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.090 [2024-12-10 12:36:06.231486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.090 [2024-12-10 12:36:06.231507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.090 [2024-12-10 12:36:06.231515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.090 [2024-12-10 12:36:06.236806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.090 [2024-12-10 12:36:06.236828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.090 [2024-12-10 12:36:06.236836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.090 [2024-12-10 12:36:06.242136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.090 [2024-12-10 12:36:06.242166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.090 [2024-12-10 12:36:06.242174] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.090 [2024-12-10 12:36:06.247525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.090 [2024-12-10 12:36:06.247549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.090 [2024-12-10 12:36:06.247561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.360 [2024-12-10 12:36:06.252848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.361 [2024-12-10 12:36:06.252876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.361 [2024-12-10 12:36:06.252886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.361 [2024-12-10 12:36:06.258217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.361 [2024-12-10 12:36:06.258242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.361 [2024-12-10 12:36:06.258252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.361 [2024-12-10 12:36:06.263476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.361 [2024-12-10 12:36:06.263502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:44.361 [2024-12-10 12:36:06.263514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.361 [2024-12-10 12:36:06.268808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.361 [2024-12-10 12:36:06.268833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.361 [2024-12-10 12:36:06.268844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.361 [2024-12-10 12:36:06.274470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.361 [2024-12-10 12:36:06.274494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.361 [2024-12-10 12:36:06.274503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.361 [2024-12-10 12:36:06.280625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.361 [2024-12-10 12:36:06.280650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.361 [2024-12-10 12:36:06.280664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.361 [2024-12-10 12:36:06.286898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.361 [2024-12-10 12:36:06.286924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.361 [2024-12-10 12:36:06.286933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.361 [2024-12-10 12:36:06.294400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.361 [2024-12-10 12:36:06.294423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.361 [2024-12-10 12:36:06.294432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.361 [2024-12-10 12:36:06.301805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.361 [2024-12-10 12:36:06.301829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.361 [2024-12-10 12:36:06.301838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.361 [2024-12-10 12:36:06.309255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.361 [2024-12-10 12:36:06.309280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.361 [2024-12-10 12:36:06.309288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.361 [2024-12-10 12:36:06.316919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.361 [2024-12-10 12:36:06.316942] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.361 [2024-12-10 12:36:06.316951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:44.361 [2024-12-10 12:36:06.324640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.361 [2024-12-10 12:36:06.324663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.361 [2024-12-10 12:36:06.324673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:44.361 [2024-12-10 12:36:06.332554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.361 [2024-12-10 12:36:06.332579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.361 [2024-12-10 12:36:06.332588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:44.361 [2024-12-10 12:36:06.340093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.361 [2024-12-10 12:36:06.340118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.361 [2024-12-10 12:36:06.340126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:44.361 [2024-12-10 12:36:06.347769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.361 [2024-12-10 12:36:06.347797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.361 [2024-12-10 12:36:06.347806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:44.361 [2024-12-10 12:36:06.355593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.361 [2024-12-10 12:36:06.355617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.361 [2024-12-10 12:36:06.355626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:44.361 [2024-12-10 12:36:06.363301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.361 [2024-12-10 12:36:06.363325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.361 [2024-12-10 12:36:06.363334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:44.361 [2024-12-10 12:36:06.370711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.361 [2024-12-10 12:36:06.370734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.361 [2024-12-10 12:36:06.370744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:44.361 [2024-12-10 12:36:06.378418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.361 [2024-12-10 12:36:06.378442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.361 [2024-12-10 12:36:06.378451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:44.361 [2024-12-10 12:36:06.386043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.361 [2024-12-10 12:36:06.386065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.361 [2024-12-10 12:36:06.386075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:44.361 [2024-12-10 12:36:06.393730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.361 [2024-12-10 12:36:06.393753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.361 [2024-12-10 12:36:06.393762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:44.361 [2024-12-10 12:36:06.397974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.361 [2024-12-10 12:36:06.397997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.361 [2024-12-10 12:36:06.398006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:44.361 [2024-12-10 12:36:06.403283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.361 [2024-12-10 12:36:06.403307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.361 [2024-12-10 12:36:06.403316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:44.361 [2024-12-10 12:36:06.408227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.361 [2024-12-10 12:36:06.408249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.361 [2024-12-10 12:36:06.408257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:44.361 [2024-12-10 12:36:06.413372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.361 [2024-12-10 12:36:06.413395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.361 [2024-12-10 12:36:06.413404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:44.361 [2024-12-10 12:36:06.418521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.361 [2024-12-10 12:36:06.418543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.361 [2024-12-10 12:36:06.418553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:44.361 [2024-12-10 12:36:06.423683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.361 [2024-12-10 12:36:06.423705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.361 [2024-12-10 12:36:06.423714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:44.361 [2024-12-10 12:36:06.429012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.362 [2024-12-10 12:36:06.429035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.362 [2024-12-10 12:36:06.429043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:44.362 [2024-12-10 12:36:06.434403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.362 [2024-12-10 12:36:06.434427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.362 [2024-12-10 12:36:06.434436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:44.362 [2024-12-10 12:36:06.439727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.362 [2024-12-10 12:36:06.439750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.362 [2024-12-10 12:36:06.439758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:44.362 [2024-12-10 12:36:06.445085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.362 [2024-12-10 12:36:06.445107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.362 [2024-12-10 12:36:06.445116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:44.362 [2024-12-10 12:36:06.450442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.362 [2024-12-10 12:36:06.450464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.362 [2024-12-10 12:36:06.450476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:44.362 [2024-12-10 12:36:06.455950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.362 [2024-12-10 12:36:06.455974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.362 [2024-12-10 12:36:06.455982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:44.362 [2024-12-10 12:36:06.461239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.362 [2024-12-10 12:36:06.461261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.362 [2024-12-10 12:36:06.461269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:44.362 [2024-12-10 12:36:06.467506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.362 [2024-12-10 12:36:06.467529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.362 [2024-12-10 12:36:06.467539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:44.362 [2024-12-10 12:36:06.473462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.362 [2024-12-10 12:36:06.473486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.362 [2024-12-10 12:36:06.473494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:44.362 [2024-12-10 12:36:06.478796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.362 [2024-12-10 12:36:06.478819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.362 [2024-12-10 12:36:06.478828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:44.362 [2024-12-10 12:36:06.484135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.362 [2024-12-10 12:36:06.484163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.362 [2024-12-10 12:36:06.484172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:44.362 [2024-12-10 12:36:06.489462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.362 [2024-12-10 12:36:06.489484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.362 [2024-12-10 12:36:06.489492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:44.362 [2024-12-10 12:36:06.494819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.362 [2024-12-10 12:36:06.494842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.362 [2024-12-10 12:36:06.494850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:44.362 [2024-12-10 12:36:06.500075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.362 [2024-12-10 12:36:06.500097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.362 [2024-12-10 12:36:06.500106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:44.362 [2024-12-10 12:36:06.505312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.362 [2024-12-10 12:36:06.505334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.362 [2024-12-10 12:36:06.505343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:44.362 [2024-12-10 12:36:06.510591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.362 [2024-12-10 12:36:06.510613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.362 [2024-12-10 12:36:06.510621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:44.362 [2024-12-10 12:36:06.515803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.362 [2024-12-10 12:36:06.515827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.362 [2024-12-10 12:36:06.515835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:44.362 [2024-12-10 12:36:06.521102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.362 [2024-12-10 12:36:06.521125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.362 [2024-12-10 12:36:06.521133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:44.622 [2024-12-10 12:36:06.526409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.622 [2024-12-10 12:36:06.526433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.622 [2024-12-10 12:36:06.526442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:44.622 [2024-12-10 12:36:06.531742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.622 [2024-12-10 12:36:06.531765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.622 [2024-12-10 12:36:06.531774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:44.622 [2024-12-10 12:36:06.536907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.622 [2024-12-10 12:36:06.536930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.622 [2024-12-10 12:36:06.536938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:44.622 [2024-12-10 12:36:06.542141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.622 [2024-12-10 12:36:06.542169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.622 [2024-12-10 12:36:06.542182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:44.622 [2024-12-10 12:36:06.547441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.622 [2024-12-10 12:36:06.547463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.622 [2024-12-10 12:36:06.547472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:44.622 [2024-12-10 12:36:06.552647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.622 [2024-12-10 12:36:06.552669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.622 [2024-12-10 12:36:06.552677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:44.622 [2024-12-10 12:36:06.557854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.622 [2024-12-10 12:36:06.557877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.622 [2024-12-10 12:36:06.557885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:44.622 5524.00 IOPS, 690.50 MiB/s [2024-12-10T11:36:06.790Z] [2024-12-10 12:36:06.564485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.622 [2024-12-10 12:36:06.564508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.622 [2024-12-10 12:36:06.564516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:44.622 [2024-12-10 12:36:06.569772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.622 [2024-12-10 12:36:06.569795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.622 [2024-12-10 12:36:06.569803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:44.622 [2024-12-10 12:36:06.575422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.622 [2024-12-10 12:36:06.575446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.622 [2024-12-10 12:36:06.575455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:44.622 [2024-12-10 12:36:06.581297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.622 [2024-12-10 12:36:06.581320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.622 [2024-12-10 12:36:06.581330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:44.622 [2024-12-10 12:36:06.588218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.622 [2024-12-10 12:36:06.588241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.622 [2024-12-10 12:36:06.588249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:44.622 [2024-12-10 12:36:06.594738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.622 [2024-12-10 12:36:06.594766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.622 [2024-12-10 12:36:06.594775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:44.622 [2024-12-10 12:36:06.602313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.622 [2024-12-10 12:36:06.602336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.622 [2024-12-10 12:36:06.602345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:44.622 [2024-12-10 12:36:06.609887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.622 [2024-12-10 12:36:06.609909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.622 [2024-12-10 12:36:06.609918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:44.622 [2024-12-10 12:36:06.616978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.622 [2024-12-10 12:36:06.617001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.622 [2024-12-10 12:36:06.617011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:44.622 [2024-12-10 12:36:06.624319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.622 [2024-12-10 12:36:06.624344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.622 [2024-12-10 12:36:06.624353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:44.622 [2024-12-10 12:36:06.631872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.622 [2024-12-10 12:36:06.631895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.622 [2024-12-10 12:36:06.631904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:44.622 [2024-12-10 12:36:06.639394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.622 [2024-12-10 12:36:06.639417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.622 [2024-12-10 12:36:06.639425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:44.622 [2024-12-10 12:36:06.646690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.622 [2024-12-10 12:36:06.646713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.623 [2024-12-10 12:36:06.646722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:44.623 [2024-12-10 12:36:06.654351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.623 [2024-12-10 12:36:06.654374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.623 [2024-12-10 12:36:06.654383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:44.623 [2024-12-10 12:36:06.661498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.623 [2024-12-10 12:36:06.661522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.623 [2024-12-10 12:36:06.661531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:44.623 [2024-12-10 12:36:06.668868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.623 [2024-12-10 12:36:06.668891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.623 [2024-12-10 12:36:06.668901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:44.623 [2024-12-10 12:36:06.676386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.623 [2024-12-10 12:36:06.676409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.623 [2024-12-10 12:36:06.676418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:44.623 [2024-12-10 12:36:06.684060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.623 [2024-12-10 12:36:06.684083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.623 [2024-12-10 12:36:06.684092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:44.623 [2024-12-10 12:36:06.691689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.623 [2024-12-10 12:36:06.691712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.623 [2024-12-10 12:36:06.691720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:44.623 [2024-12-10 12:36:06.699335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.623 [2024-12-10 12:36:06.699360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.623 [2024-12-10 12:36:06.699370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:44.623 [2024-12-10 12:36:06.707774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.623 [2024-12-10 12:36:06.707797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.623 [2024-12-10 12:36:06.707806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:44.623 [2024-12-10 12:36:06.715909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.623 [2024-12-10 12:36:06.715932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.623 [2024-12-10 12:36:06.715941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:44.623 [2024-12-10 12:36:06.723785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.623 [2024-12-10 12:36:06.723819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.623 [2024-12-10 12:36:06.723834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:44.623 [2024-12-10 12:36:06.732285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.623 [2024-12-10 12:36:06.732309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.623 [2024-12-10 12:36:06.732318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:44.623 [2024-12-10 12:36:06.740122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.623 [2024-12-10 12:36:06.740146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.623 [2024-12-10 12:36:06.740155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:44.623 [2024-12-10 12:36:06.748296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.623 [2024-12-10 12:36:06.748321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.623 [2024-12-10 12:36:06.748330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:44.623 [2024-12-10 12:36:06.756111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.623 [2024-12-10 12:36:06.756134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.623 [2024-12-10 12:36:06.756143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:44.623 [2024-12-10 12:36:06.762351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.623 [2024-12-10 12:36:06.762385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.623 [2024-12-10 12:36:06.762394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:44.623 [2024-12-10 12:36:06.768092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.623 [2024-12-10 12:36:06.768115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.623 [2024-12-10 12:36:06.768124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:44.623 [2024-12-10 12:36:06.774603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.623 [2024-12-10 12:36:06.774627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.623 [2024-12-10 12:36:06.774636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:44.623 [2024-12-10 12:36:06.780390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.623 [2024-12-10 12:36:06.780413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.623 [2024-12-10 12:36:06.780421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:44.623 [2024-12-10 12:36:06.786962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.623 [2024-12-10 12:36:06.786987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.623 [2024-12-10 12:36:06.787001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:44.883 [2024-12-10 12:36:06.792549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.883 [2024-12-10 12:36:06.792573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.883 [2024-12-10 12:36:06.792582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:44.883 [2024-12-10 12:36:06.797989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.883 [2024-12-10 12:36:06.798011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.883 [2024-12-10 12:36:06.798020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:44.883 [2024-12-10 12:36:06.803303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.883 [2024-12-10 12:36:06.803325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.883 [2024-12-10 12:36:06.803333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:44.883 [2024-12-10 12:36:06.808648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.883 [2024-12-10 12:36:06.808670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.883 [2024-12-10 12:36:06.808679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:44.883 [2024-12-10 12:36:06.814052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.883 [2024-12-10 12:36:06.814074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.883 [2024-12-10 12:36:06.814082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:44.883 [2024-12-10 12:36:06.819407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.883 [2024-12-10 12:36:06.819429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.883 [2024-12-10 12:36:06.819438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:44.623 [2024-12-10 12:36:06.824796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.623 [2024-12-10 12:36:06.824819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:44.623 [2024-12-10 12:36:06.824827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:44.884 [2024-12-10 12:36:06.830146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:44.884 [2024-12-10 12:36:06.830173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.830186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.835513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.835536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.835546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.840851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.840874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.840883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.846225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.846247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.846255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.851571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.851592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.851600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.856947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.856970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.856989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.862314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.862336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.862345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.867705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.867728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.867736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.873569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 
12:36:06.873591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.873599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.878990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.879016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.879025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.884340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.884363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.884371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.889706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.889729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.889737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.895100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.895122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.895130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.900292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.900315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.900323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.905658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.905682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.905690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.910946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.910969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.910978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.916209] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.916232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.916241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.921583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.921605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.921613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.926996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.927019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.927027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.932510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.932532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.932540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.937830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.937852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.937860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.943164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.943192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.943200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.948540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.948562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.948571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.953839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.953862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.953870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.959121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.959143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.959152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.964033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.964055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.964064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.969285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.969307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.884 [2024-12-10 12:36:06.969319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.884 [2024-12-10 12:36:06.974687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.884 [2024-12-10 12:36:06.974710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.885 [2024-12-10 
12:36:06.974718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.885 [2024-12-10 12:36:06.980065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.885 [2024-12-10 12:36:06.980086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.885 [2024-12-10 12:36:06.980094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.885 [2024-12-10 12:36:06.985350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.885 [2024-12-10 12:36:06.985372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.885 [2024-12-10 12:36:06.985380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.885 [2024-12-10 12:36:06.990759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.885 [2024-12-10 12:36:06.990781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.885 [2024-12-10 12:36:06.990789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.885 [2024-12-10 12:36:06.996141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.885 [2024-12-10 12:36:06.996168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5216 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.885 [2024-12-10 12:36:06.996177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.885 [2024-12-10 12:36:07.001471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.885 [2024-12-10 12:36:07.001495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.885 [2024-12-10 12:36:07.001504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.885 [2024-12-10 12:36:07.007020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.885 [2024-12-10 12:36:07.007042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.885 [2024-12-10 12:36:07.007050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.885 [2024-12-10 12:36:07.012327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.885 [2024-12-10 12:36:07.012349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.885 [2024-12-10 12:36:07.012357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.885 [2024-12-10 12:36:07.017597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.885 [2024-12-10 12:36:07.017621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.885 [2024-12-10 12:36:07.017629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.885 [2024-12-10 12:36:07.021108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.885 [2024-12-10 12:36:07.021130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.885 [2024-12-10 12:36:07.021138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.885 [2024-12-10 12:36:07.025281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.885 [2024-12-10 12:36:07.025306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.885 [2024-12-10 12:36:07.025315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:44.885 [2024-12-10 12:36:07.030528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.885 [2024-12-10 12:36:07.030550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.885 [2024-12-10 12:36:07.030559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:44.885 [2024-12-10 12:36:07.035839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12d40c0) 00:27:44.885 [2024-12-10 12:36:07.035861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.885 [2024-12-10 12:36:07.035869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:44.885 [2024-12-10 12:36:07.041234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.885 [2024-12-10 12:36:07.041255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.885 [2024-12-10 12:36:07.041264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:44.885 [2024-12-10 12:36:07.046685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:44.885 [2024-12-10 12:36:07.046708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:44.885 [2024-12-10 12:36:07.046717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.145 [2024-12-10 12:36:07.051949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:45.145 [2024-12-10 12:36:07.051971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.145 [2024-12-10 12:36:07.051979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.145 [2024-12-10 12:36:07.057272] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:45.145 [2024-12-10 12:36:07.057295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.145 [2024-12-10 12:36:07.057307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.145 [2024-12-10 12:36:07.062587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:45.145 [2024-12-10 12:36:07.062609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.145 [2024-12-10 12:36:07.062618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.145 [2024-12-10 12:36:07.067879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:45.145 [2024-12-10 12:36:07.067902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.145 [2024-12-10 12:36:07.067910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.145 [2024-12-10 12:36:07.073072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:45.145 [2024-12-10 12:36:07.073094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.145 [2024-12-10 12:36:07.073102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:27:45.145 [2024-12-10 12:36:07.078314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:45.145 [2024-12-10 12:36:07.078336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.145 [2024-12-10 12:36:07.078344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.145 [2024-12-10 12:36:07.083618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:45.145 [2024-12-10 12:36:07.083639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.145 [2024-12-10 12:36:07.083647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.145 [2024-12-10 12:36:07.088999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:45.145 [2024-12-10 12:36:07.089022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.145 [2024-12-10 12:36:07.089030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.145 [2024-12-10 12:36:07.094396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:45.145 [2024-12-10 12:36:07.094419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.145 [2024-12-10 12:36:07.094428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:45.145 [2024-12-10 12:36:07.099699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:45.145 [2024-12-10 12:36:07.099721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.145 [2024-12-10 12:36:07.099730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:45.145 [2024-12-10 12:36:07.104944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:45.145 [2024-12-10 12:36:07.104971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.145 [2024-12-10 12:36:07.104980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... identical data digest error triplets on tqpair=(0x12d40c0) — *ERROR* digest line, READ command print, and TRANSIENT TRANSPORT ERROR (00/22) completion with varying cid/lba/sqhd — repeated from 12:36:07.110675 through 12:36:07.534222 ...]
00:27:45.408 [2024-12-10 12:36:07.539997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0)
00:27:45.408 [2024-12-10 12:36:07.540019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:45.408 [2024-12-10 12:36:07.540027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:45.408 [2024-12-10 12:36:07.545422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x12d40c0) 00:27:45.408 [2024-12-10 12:36:07.545445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.408 [2024-12-10 12:36:07.545453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:45.408 [2024-12-10 12:36:07.551052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:45.408 [2024-12-10 12:36:07.551076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.408 [2024-12-10 12:36:07.551084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:45.408 [2024-12-10 12:36:07.556641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:45.408 [2024-12-10 12:36:07.556664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.408 [2024-12-10 12:36:07.556673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:45.408 [2024-12-10 12:36:07.562633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12d40c0) 00:27:45.408 [2024-12-10 12:36:07.562657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:45.408 [2024-12-10 12:36:07.562666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:45.408 5472.00 IOPS, 684.00 MiB/s 00:27:45.409 Latency(us) 
00:27:45.409 [2024-12-10T11:36:07.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:45.409 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:45.409 nvme0n1 : 2.00 5469.77 683.72 0.00 0.00 2922.30 918.93 9289.02 00:27:45.409 [2024-12-10T11:36:07.577Z] =================================================================================================================== 00:27:45.409 [2024-12-10T11:36:07.577Z] Total : 5469.77 683.72 0.00 0.00 2922.30 918.93 9289.02 00:27:45.409 { 00:27:45.409 "results": [ 00:27:45.409 { 00:27:45.409 "job": "nvme0n1", 00:27:45.409 "core_mask": "0x2", 00:27:45.409 "workload": "randread", 00:27:45.409 "status": "finished", 00:27:45.409 "queue_depth": 16, 00:27:45.409 "io_size": 131072, 00:27:45.409 "runtime": 2.003739, 00:27:45.409 "iops": 5469.774257026489, 00:27:45.409 "mibps": 683.7217821283111, 00:27:45.409 "io_failed": 0, 00:27:45.409 "io_timeout": 0, 00:27:45.409 "avg_latency_us": 2922.2980894953985, 00:27:45.409 "min_latency_us": 918.9286956521739, 00:27:45.409 "max_latency_us": 9289.015652173914 00:27:45.409 } 00:27:45.409 ], 00:27:45.409 "core_count": 1 00:27:45.409 } 00:27:45.668 12:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:45.668 12:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:45.668 12:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:45.668 | .driver_specific 00:27:45.668 | .nvme_error 00:27:45.668 | .status_code 00:27:45.668 | .command_transient_transport_error' 00:27:45.668 12:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:45.668 12:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@71 -- # (( 354 > 0 )) 00:27:45.668 12:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1778335 00:27:45.668 12:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1778335 ']' 00:27:45.668 12:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1778335 00:27:45.668 12:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:45.668 12:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:45.668 12:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1778335 00:27:45.927 12:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:45.927 12:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:45.927 12:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1778335' 00:27:45.927 killing process with pid 1778335 00:27:45.927 12:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1778335 00:27:45.927 Received shutdown signal, test time was about 2.000000 seconds 00:27:45.927 00:27:45.927 Latency(us) 00:27:45.927 [2024-12-10T11:36:08.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:45.927 [2024-12-10T11:36:08.095Z] =================================================================================================================== 00:27:45.927 [2024-12-10T11:36:08.095Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:45.927 12:36:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1778335 00:27:45.927 12:36:08 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:45.927 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:45.927 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:45.927 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:45.927 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:45.927 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1779383 00:27:45.927 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1779383 /var/tmp/bperf.sock 00:27:45.927 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:45.927 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1779383 ']' 00:27:45.927 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:45.927 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:45.927 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:45.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
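The bdevperf summary printed earlier in this log reports throughput both as IOPS and MiB/s; with the fixed 131072-byte I/O size, the `mibps` field is just `iops * io_size / 2^20`. A minimal sketch of that relationship (the JSON literal is a trimmed copy of the `results` object emitted above; field names match the log):

```python
import json

# Trimmed copy of the bdevperf "results" JSON printed earlier in this log.
summary = json.loads("""
{
  "results": [
    {
      "job": "nvme0n1",
      "workload": "randread",
      "io_size": 131072,
      "runtime": 2.003739,
      "iops": 5469.774257026489,
      "mibps": 683.7217821283111
    }
  ]
}
""")

job = summary["results"][0]
# MiB/s is derived from IOPS and the per-I/O size (131072 B = 1/8 MiB).
mibps = job["iops"] * job["io_size"] / (1024 * 1024)
print(round(mibps, 2))  # agrees with the reported "mibps" field
```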
00:27:45.927 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:45.927 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:45.927 [2024-12-10 12:36:08.050148] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:27:45.927 [2024-12-10 12:36:08.050200] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1779383 ] 00:27:46.186 [2024-12-10 12:36:08.125407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.186 [2024-12-10 12:36:08.166440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.186 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:46.186 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:46.186 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:46.186 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:46.445 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:46.445 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.445 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:46.445 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.445 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:46.445 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:46.704 nvme0n1 00:27:46.704 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:46.704 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.704 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:46.963 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.963 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:46.963 12:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:46.963 Running I/O for 2 seconds... 
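The "Data digest error" records that follow come from NVMe/TCP's optional data digest: a CRC32C computed over each data PDU's payload (enabled here via the `--ddgst` attach flag) and deliberately corrupted by the `accel_error_inject_error -o crc32c` calls above. A minimal pure-Python sketch of the CRC32C (Castagnoli) algorithm, not SPDK's accelerated implementation:

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78)."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

# Standard CRC32C check value for the ASCII digits "123456789".
assert crc32c(b"123456789") == 0xE3069283

# Flipping a single payload bit changes the digest, which is what the
# injected corruption triggers on the receive path.
assert crc32c(b"\x00" * 16) != crc32c(b"\x01" + b"\x00" * 15)
```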
00:27:46.963 [2024-12-10 12:36:08.978503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee1f80 00:27:46.963 [2024-12-10 12:36:08.979423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.963 [2024-12-10 12:36:08.979453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.963 [2024-12-10 12:36:08.988049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016efe720 00:27:46.963 [2024-12-10 12:36:08.988752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.963 [2024-12-10 12:36:08.988776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:46.963 [2024-12-10 12:36:08.998247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eea248 00:27:46.963 [2024-12-10 12:36:08.999550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.963 [2024-12-10 12:36:08.999570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:46.963 [2024-12-10 12:36:09.004921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016edf118 00:27:46.963 [2024-12-10 12:36:09.005484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.963 [2024-12-10 12:36:09.005504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:46.963 [2024-12-10 12:36:09.015656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016efd640 00:27:46.963 [2024-12-10 12:36:09.016541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.963 [2024-12-10 12:36:09.016561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:46.963 [2024-12-10 12:36:09.024497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee5658 00:27:46.963 [2024-12-10 12:36:09.025369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.963 [2024-12-10 12:36:09.025390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:46.963 [2024-12-10 12:36:09.034825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eea248 00:27:46.963 [2024-12-10 12:36:09.035880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.963 [2024-12-10 12:36:09.035901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.963 [2024-12-10 12:36:09.044345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef4b08 00:27:46.963 [2024-12-10 12:36:09.045494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.963 [2024-12-10 12:36:09.045514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:46.963 [2024-12-10 12:36:09.053574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef46d0 00:27:46.963 [2024-12-10 12:36:09.054825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.963 [2024-12-10 12:36:09.054845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:46.964 [2024-12-10 12:36:09.063218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef2d80 00:27:46.964 [2024-12-10 12:36:09.064506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.964 [2024-12-10 12:36:09.064526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:46.964 [2024-12-10 12:36:09.071165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016efd640 00:27:46.964 [2024-12-10 12:36:09.071957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.964 [2024-12-10 12:36:09.071981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:46.964 [2024-12-10 12:36:09.081502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eddc00 00:27:46.964 [2024-12-10 12:36:09.082859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.964 [2024-12-10 12:36:09.082880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:46.964 [2024-12-10 12:36:09.090036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016edece0 00:27:46.964 [2024-12-10 12:36:09.091048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.964 [2024-12-10 12:36:09.091068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:46.964 [2024-12-10 12:36:09.099301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef8e88 00:27:46.964 [2024-12-10 12:36:09.100080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.964 [2024-12-10 12:36:09.100100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:46.964 [2024-12-10 12:36:09.108784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef7970 00:27:46.964 [2024-12-10 12:36:09.109966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.964 [2024-12-10 12:36:09.109985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:46.964 [2024-12-10 12:36:09.117392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee0630 00:27:46.964 [2024-12-10 12:36:09.118488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.964 
[2024-12-10 12:36:09.118509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:46.964 [2024-12-10 12:36:09.126761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eee190 00:27:46.964 [2024-12-10 12:36:09.127912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:46.964 [2024-12-10 12:36:09.127934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:47.223 [2024-12-10 12:36:09.136090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ede8a8 00:27:47.223 [2024-12-10 12:36:09.137419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.223 [2024-12-10 12:36:09.137440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:47.223 [2024-12-10 12:36:09.146442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eeaef0 00:27:47.223 [2024-12-10 12:36:09.147840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.223 [2024-12-10 12:36:09.147859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:47.223 [2024-12-10 12:36:09.155188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016efb048 00:27:47.223 [2024-12-10 12:36:09.156280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9962 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:27:47.223 [2024-12-10 12:36:09.156301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:47.223 [2024-12-10 12:36:09.164514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eddc00 00:27:47.223 [2024-12-10 12:36:09.165589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.223 [2024-12-10 12:36:09.165608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:47.223 [2024-12-10 12:36:09.173572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee8088 00:27:47.223 [2024-12-10 12:36:09.174494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.223 [2024-12-10 12:36:09.174513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:47.223 [2024-12-10 12:36:09.182638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee8088 00:27:47.223 [2024-12-10 12:36:09.183592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.223 [2024-12-10 12:36:09.183612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:47.223 [2024-12-10 12:36:09.191787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee8088 00:27:47.223 [2024-12-10 12:36:09.192747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 
nsid:1 lba:12365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.223 [2024-12-10 12:36:09.192766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:47.223 [2024-12-10 12:36:09.200943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee8088 00:27:47.223 [2024-12-10 12:36:09.201878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.223 [2024-12-10 12:36:09.201898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:47.223 [2024-12-10 12:36:09.210131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee8088 00:27:47.223 [2024-12-10 12:36:09.210955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.223 [2024-12-10 12:36:09.210974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:47.224 [2024-12-10 12:36:09.220388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eea680 00:27:47.224 [2024-12-10 12:36:09.221678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.224 [2024-12-10 12:36:09.221698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:47.224 [2024-12-10 12:36:09.226958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016efe2e8 00:27:47.224 [2024-12-10 12:36:09.227606] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.224 [2024-12-10 12:36:09.227626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:47.224 [2024-12-10 12:36:09.236996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eeee38 00:27:47.224 [2024-12-10 12:36:09.237455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.224 [2024-12-10 12:36:09.237476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.224 [2024-12-10 12:36:09.246608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eec840 00:27:47.224 [2024-12-10 12:36:09.247414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.224 [2024-12-10 12:36:09.247434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:47.224 [2024-12-10 12:36:09.255217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eeaab8 00:27:47.224 [2024-12-10 12:36:09.255879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.224 [2024-12-10 12:36:09.255899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:47.224 [2024-12-10 12:36:09.264854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee3498 00:27:47.224 
[2024-12-10 12:36:09.265644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.224 [2024-12-10 12:36:09.265664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:47.224 [2024-12-10 12:36:09.274494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef8e88 00:27:47.224 [2024-12-10 12:36:09.275495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.224 [2024-12-10 12:36:09.275515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:47.224 [2024-12-10 12:36:09.284520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef7100 00:27:47.224 [2024-12-10 12:36:09.285309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.224 [2024-12-10 12:36:09.285329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:47.224 [2024-12-10 12:36:09.292990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef5be8 00:27:47.224 [2024-12-10 12:36:09.293802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.224 [2024-12-10 12:36:09.293822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:47.224 [2024-12-10 12:36:09.303731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) 
with pdu=0x200016ee5ec8 00:27:47.224 [2024-12-10 12:36:09.305146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.224 [2024-12-10 12:36:09.305170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:47.224 [2024-12-10 12:36:09.310338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee6738 00:27:47.224 [2024-12-10 12:36:09.310994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.224 [2024-12-10 12:36:09.311020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:47.224 [2024-12-10 12:36:09.320284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eef270 00:27:47.224 [2024-12-10 12:36:09.320814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.224 [2024-12-10 12:36:09.320835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:47.224 [2024-12-10 12:36:09.330784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef6458 00:27:47.224 [2024-12-10 12:36:09.332009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.224 [2024-12-10 12:36:09.332029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:47.224 [2024-12-10 12:36:09.339602] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xdade10) with pdu=0x200016ee95a0 00:27:47.224 [2024-12-10 12:36:09.340812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.224 [2024-12-10 12:36:09.340831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:47.224 [2024-12-10 12:36:09.348120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee5ec8 00:27:47.224 [2024-12-10 12:36:09.348962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.224 [2024-12-10 12:36:09.348981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:47.224 [2024-12-10 12:36:09.357144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee9e10 00:27:47.224 [2024-12-10 12:36:09.358000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.224 [2024-12-10 12:36:09.358020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:47.224 [2024-12-10 12:36:09.366297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef3e60 00:27:47.224 [2024-12-10 12:36:09.367169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.224 [2024-12-10 12:36:09.367189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:47.224 [2024-12-10 12:36:09.375486] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eeea00 00:27:47.224 [2024-12-10 12:36:09.376357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.224 [2024-12-10 12:36:09.376376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:47.224 [2024-12-10 12:36:09.384747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef0350 00:27:47.224 [2024-12-10 12:36:09.385625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.224 [2024-12-10 12:36:09.385645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:47.483 [2024-12-10 12:36:09.394070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee3d08 00:27:47.483 [2024-12-10 12:36:09.394921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.483 [2024-12-10 12:36:09.394941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:47.483 [2024-12-10 12:36:09.403538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee2c28 00:27:47.483 [2024-12-10 12:36:09.404191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.404212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0 
00:27:47.484 [2024-12-10 12:36:09.414081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016edf118 00:27:47.484 [2024-12-10 12:36:09.415558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.415577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.422632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ede8a8 00:27:47.484 [2024-12-10 12:36:09.423713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.423734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.431020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eeaef0 00:27:47.484 [2024-12-10 12:36:09.432339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.432358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.439507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef92c0 00:27:47.484 [2024-12-10 12:36:09.440253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.440273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.448954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eec840 00:27:47.484 [2024-12-10 12:36:09.449836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.449856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.458150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016efa3a0 00:27:47.484 [2024-12-10 12:36:09.458681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.458702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.468761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eddc00 00:27:47.484 [2024-12-10 12:36:09.470075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.470094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.477340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016efef90 00:27:47.484 [2024-12-10 12:36:09.478336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.478355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.486434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016efe2e8 00:27:47.484 [2024-12-10 12:36:09.487434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.487454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.495825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016edf550 00:27:47.484 [2024-12-10 12:36:09.496842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.496862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.505087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef0bc0 00:27:47.484 [2024-12-10 12:36:09.506052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.506071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.515466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eeb328 00:27:47.484 [2024-12-10 12:36:09.516896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.516915] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.523987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef7100 00:27:47.484 [2024-12-10 12:36:09.525074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.525093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.533078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef20d8 00:27:47.484 [2024-12-10 12:36:09.534166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.534185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.542269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016edfdc0 00:27:47.484 [2024-12-10 12:36:09.543357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.543376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.551474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef7970 00:27:47.484 [2024-12-10 12:36:09.552560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:47.484 [2024-12-10 12:36:09.552579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.560644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eeea00 00:27:47.484 [2024-12-10 12:36:09.561753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.561772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.569803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee99d8 00:27:47.484 [2024-12-10 12:36:09.570884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.570902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.578991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee49b0 00:27:47.484 [2024-12-10 12:36:09.580072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.580091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.588163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee38d0 00:27:47.484 [2024-12-10 12:36:09.589255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12516 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.589274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.597317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef0788 00:27:47.484 [2024-12-10 12:36:09.598417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.598436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.606530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eed0b0 00:27:47.484 [2024-12-10 12:36:09.607613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.607632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.615691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee88f8 00:27:47.484 [2024-12-10 12:36:09.616791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.616810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.624870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee8088 00:27:47.484 [2024-12-10 12:36:09.625960] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.625979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.634061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee6738 00:27:47.484 [2024-12-10 12:36:09.635148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.635174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:47.484 [2024-12-10 12:36:09.643249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef1868 00:27:47.484 [2024-12-10 12:36:09.644357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.484 [2024-12-10 12:36:09.644376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:47.744 [2024-12-10 12:36:09.651945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eeaab8 00:27:47.744 [2024-12-10 12:36:09.653356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.744 [2024-12-10 12:36:09.653377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:47.744 [2024-12-10 12:36:09.660505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef7100 00:27:47.744 [2024-12-10 12:36:09.661242] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.744 [2024-12-10 12:36:09.661263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:47.744 [2024-12-10 12:36:09.669638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef20d8 00:27:47.744 [2024-12-10 12:36:09.670401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.744 [2024-12-10 12:36:09.670421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:47.744 [2024-12-10 12:36:09.678850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016edfdc0 00:27:47.744 [2024-12-10 12:36:09.679607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.744 [2024-12-10 12:36:09.679627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:47.744 [2024-12-10 12:36:09.688038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee8d30 00:27:47.744 [2024-12-10 12:36:09.688770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.744 [2024-12-10 12:36:09.688790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:47.744 [2024-12-10 12:36:09.697221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef8618 
00:27:47.744 [2024-12-10 12:36:09.697973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.744 [2024-12-10 12:36:09.697993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:47.744 [2024-12-10 12:36:09.706422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee5220 00:27:47.744 [2024-12-10 12:36:09.707170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.744 [2024-12-10 12:36:09.707189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:47.744 [2024-12-10 12:36:09.715573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ede8a8 00:27:47.744 [2024-12-10 12:36:09.716340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.744 [2024-12-10 12:36:09.716359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:47.744 [2024-12-10 12:36:09.724890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef3a28 00:27:47.744 [2024-12-10 12:36:09.725642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.744 [2024-12-10 12:36:09.725662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:47.744 [2024-12-10 12:36:09.734091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xdade10) with pdu=0x200016ee0630 00:27:47.744 [2024-12-10 12:36:09.734848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.744 [2024-12-10 12:36:09.734867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:47.744 [2024-12-10 12:36:09.743273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef4298 00:27:47.744 [2024-12-10 12:36:09.744046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.744 [2024-12-10 12:36:09.744065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:47.744 [2024-12-10 12:36:09.752729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef5378 00:27:47.744 [2024-12-10 12:36:09.753499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.744 [2024-12-10 12:36:09.753519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:47.744 [2024-12-10 12:36:09.761999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef6458 00:27:47.744 [2024-12-10 12:36:09.762752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.744 [2024-12-10 12:36:09.762771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:47.744 [2024-12-10 12:36:09.771200] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee1710 00:27:47.744 [2024-12-10 12:36:09.771961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.744 [2024-12-10 12:36:09.771981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:47.744 [2024-12-10 12:36:09.780404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef31b8 00:27:47.744 [2024-12-10 12:36:09.781141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.744 [2024-12-10 12:36:09.781166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:47.744 [2024-12-10 12:36:09.790072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee5220 00:27:47.744 [2024-12-10 12:36:09.791043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.744 [2024-12-10 12:36:09.791062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:47.744 [2024-12-10 12:36:09.801323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee3060 00:27:47.744 [2024-12-10 12:36:09.802781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.744 [2024-12-10 12:36:09.802800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 
dnr:0 00:27:47.744 [2024-12-10 12:36:09.810939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef7538 00:27:47.745 [2024-12-10 12:36:09.812468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.745 [2024-12-10 12:36:09.812487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:47.745 [2024-12-10 12:36:09.817412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016efb480 00:27:47.745 [2024-12-10 12:36:09.818070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.745 [2024-12-10 12:36:09.818091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:47.745 [2024-12-10 12:36:09.827849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eddc00 00:27:47.745 [2024-12-10 12:36:09.828986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.745 [2024-12-10 12:36:09.829004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:47.745 [2024-12-10 12:36:09.837480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef7970 00:27:47.745 [2024-12-10 12:36:09.838811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:47.745 [2024-12-10 12:36:09.838830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:27:47.745 [2024-12-10 12:36:09.846784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016efe2e8
00:27:47.745 [2024-12-10 12:36:09.848120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:47.745 [2024-12-10 12:36:09.848139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:27:47.745 [2024-12-10 12:36:09.856704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef4b08
00:27:47.745 [2024-12-10 12:36:09.858317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:3387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:47.745 [2024-12-10 12:36:09.858338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:27:47.745 [2024-12-10 12:36:09.863443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016efc560
00:27:47.745 [2024-12-10 12:36:09.864313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:47.745 [2024-12-10 12:36:09.864332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:27:47.745 [2024-12-10 12:36:09.874647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eeee38
00:27:47.745 [2024-12-10 12:36:09.875900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:47.745 [2024-12-10 12:36:09.875923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:27:47.745 [2024-12-10 12:36:09.883401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016efa3a0
00:27:47.745 [2024-12-10 12:36:09.884520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:47.745 [2024-12-10 12:36:09.884539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:47.745 [2024-12-10 12:36:09.892949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef81e0
00:27:47.745 [2024-12-10 12:36:09.894179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:47.745 [2024-12-10 12:36:09.894199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:27:47.745 [2024-12-10 12:36:09.900460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eec840
00:27:47.745 [2024-12-10 12:36:09.901110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:47.745 [2024-12-10 12:36:09.901129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:09.910340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016efc128
00:27:48.005 [2024-12-10 12:36:09.911338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:09.911358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:09.919695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eefae0
00:27:48.005 [2024-12-10 12:36:09.920239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:09.920259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:09.928637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee6fa8
00:27:48.005 [2024-12-10 12:36:09.929480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:09.929500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:09.937886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eea680
00:27:48.005 [2024-12-10 12:36:09.938657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:09.938677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:09.947466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016efda78
00:27:48.005 [2024-12-10 12:36:09.948353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:09.948372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:09.957514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eebb98
00:27:48.005 [2024-12-10 12:36:09.958632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:09.958651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:09.967117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee27f0
00:27:48.005 [2024-12-10 12:36:09.968354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:09.968373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:27:48.005 27494.00 IOPS, 107.40 MiB/s [2024-12-10T11:36:10.173Z] [2024-12-10 12:36:09.977906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016efa3a0
00:27:48.005 [2024-12-10 12:36:09.978584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:09.978604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:09.986868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef8618
00:27:48.005 [2024-12-10 12:36:09.987841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:09.987860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:09.996035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef6cc8
00:27:48.005 [2024-12-10 12:36:09.996957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:09.996977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:10.005137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee3060
00:27:48.005 [2024-12-10 12:36:10.006048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:10.006068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:10.014766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016efd208
00:27:48.005 [2024-12-10 12:36:10.015291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:10.015312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:10.024619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef5be8
00:27:48.005 [2024-12-10 12:36:10.025225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:10.025247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:10.034725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef7100
00:27:48.005 [2024-12-10 12:36:10.035424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:10.035443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:10.043722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee9e10
00:27:48.005 [2024-12-10 12:36:10.044721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:10.044741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:10.053144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee7818
00:27:48.005 [2024-12-10 12:36:10.054169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:10.054189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:10.063330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eee5c8
00:27:48.005 [2024-12-10 12:36:10.064529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:10.064549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:10.073016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef46d0
00:27:48.005 [2024-12-10 12:36:10.074284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:10.074303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:10.082416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eef270
00:27:48.005 [2024-12-10 12:36:10.083702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:10.083721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:10.089284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee6300
00:27:48.005 [2024-12-10 12:36:10.089962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:10.089980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:10.100747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee73e0
00:27:48.005 [2024-12-10 12:36:10.101914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:10.101933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:10.110390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eea248
00:27:48.005 [2024-12-10 12:36:10.111695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:10.111714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:10.119078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef3a28
00:27:48.005 [2024-12-10 12:36:10.120118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:10.120142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:10.128265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee5a90
00:27:48.005 [2024-12-10 12:36:10.129246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:10.129266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:10.137915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee73e0
00:27:48.005 [2024-12-10 12:36:10.138987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:10.139007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:27:48.005 [2024-12-10 12:36:10.146805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eed920
00:27:48.005 [2024-12-10 12:36:10.147841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.005 [2024-12-10 12:36:10.147861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:48.006 [2024-12-10 12:36:10.155997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee3498
00:27:48.006 [2024-12-10 12:36:10.156972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.006 [2024-12-10 12:36:10.156992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:27:48.006 [2024-12-10 12:36:10.166334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee01f8
00:27:48.006 [2024-12-10 12:36:10.167673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.006 [2024-12-10 12:36:10.167693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:27:48.265 [2024-12-10 12:36:10.173272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef8e88
00:27:48.265 [2024-12-10 12:36:10.173995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.265 [2024-12-10 12:36:10.174016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:27:48.265 [2024-12-10 12:36:10.184717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016efb480
00:27:48.265 [2024-12-10 12:36:10.185926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.265 [2024-12-10 12:36:10.185946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:27:48.265 [2024-12-10 12:36:10.193475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee88f8
00:27:48.265 [2024-12-10 12:36:10.194410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.265 [2024-12-10 12:36:10.194430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:27:48.265 [2024-12-10 12:36:10.202630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eee190
00:27:48.265 [2024-12-10 12:36:10.203523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.265 [2024-12-10 12:36:10.203542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:27:48.265 [2024-12-10 12:36:10.212277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee23b8
00:27:48.265 [2024-12-10 12:36:10.213375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.265 [2024-12-10 12:36:10.213395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:27:48.265 [2024-12-10 12:36:10.221902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eea680
00:27:48.265 [2024-12-10 12:36:10.223128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.265 [2024-12-10 12:36:10.223148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:48.265 [2024-12-10 12:36:10.231495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee4140
00:27:48.265 [2024-12-10 12:36:10.232846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.265 [2024-12-10 12:36:10.232865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:27:48.265 [2024-12-10 12:36:10.241122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eea248
00:27:48.265 [2024-12-10 12:36:10.242535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.265 [2024-12-10 12:36:10.242555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:48.265 [2024-12-10 12:36:10.247604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee88f8
00:27:48.265 [2024-12-10 12:36:10.248186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.265 [2024-12-10 12:36:10.248205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:27:48.265 [2024-12-10 12:36:10.258335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee73e0
00:27:48.265 [2024-12-10 12:36:10.259432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.265 [2024-12-10 12:36:10.259451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:27:48.265 [2024-12-10 12:36:10.268041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef1430
00:27:48.265 [2024-12-10 12:36:10.269285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.265 [2024-12-10 12:36:10.269305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:27:48.265 [2024-12-10 12:36:10.276138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016efc560
00:27:48.265 [2024-12-10 12:36:10.276681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.265 [2024-12-10 12:36:10.276700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:27:48.265 [2024-12-10 12:36:10.285455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eecc78
00:27:48.265 [2024-12-10 12:36:10.286240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.265 [2024-12-10 12:36:10.286259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:27:48.265 [2024-12-10 12:36:10.295302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee5a90
00:27:48.265 [2024-12-10 12:36:10.296409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.265 [2024-12-10 12:36:10.296428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:27:48.265 [2024-12-10 12:36:10.304667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016edfdc0
00:27:48.265 [2024-12-10 12:36:10.305336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.265 [2024-12-10 12:36:10.305355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:48.265 [2024-12-10 12:36:10.313361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee23b8
00:27:48.265 [2024-12-10 12:36:10.314605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.265 [2024-12-10 12:36:10.314624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:27:48.266 [2024-12-10 12:36:10.323239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef5be8
00:27:48.266 [2024-12-10 12:36:10.324327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.266 [2024-12-10 12:36:10.324347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:27:48.266 [2024-12-10 12:36:10.332446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eee5c8
00:27:48.266 [2024-12-10 12:36:10.333446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.266 [2024-12-10 12:36:10.333466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:27:48.266 [2024-12-10 12:36:10.342079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eea248
00:27:48.266 [2024-12-10 12:36:10.343126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.266 [2024-12-10 12:36:10.343145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:48.266 [2024-12-10 12:36:10.351799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef6890
00:27:48.266 [2024-12-10 12:36:10.353097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.266 [2024-12-10 12:36:10.353117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:27:48.266 [2024-12-10 12:36:10.361438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef5be8
00:27:48.266 [2024-12-10 12:36:10.362855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.266 [2024-12-10 12:36:10.362879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:27:48.266 [2024-12-10 12:36:10.367938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016edfdc0
00:27:48.266 [2024-12-10 12:36:10.368597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.266 [2024-12-10 12:36:10.368618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:48.266 [2024-12-10 12:36:10.379881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee6fa8
00:27:48.266 [2024-12-10 12:36:10.381429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.266 [2024-12-10 12:36:10.381448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:48.266 [2024-12-10 12:36:10.386818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef7970
00:27:48.266 [2024-12-10 12:36:10.387591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.266 [2024-12-10 12:36:10.387610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:27:48.266 [2024-12-10 12:36:10.398103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef46d0
00:27:48.266 [2024-12-10 12:36:10.399371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.266 [2024-12-10 12:36:10.399391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:27:48.266 [2024-12-10 12:36:10.407713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef35f0
00:27:48.266 [2024-12-10 12:36:10.409018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.266 [2024-12-10 12:36:10.409037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:27:48.266 [2024-12-10 12:36:10.416873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eed4e8
00:27:48.266 [2024-12-10 12:36:10.418185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.266 [2024-12-10 12:36:10.418205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:27:48.266 [2024-12-10 12:36:10.423397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee8088
00:27:48.266 [2024-12-10 12:36:10.424049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.266 [2024-12-10 12:36:10.424068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:27:48.526 [2024-12-10 12:36:10.434855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee01f8
00:27:48.526 [2024-12-10 12:36:10.435926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.526 [2024-12-10 12:36:10.435948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:27:48.526 [2024-12-10 12:36:10.443644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef9b30
00:27:48.526 [2024-12-10 12:36:10.444491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.526 [2024-12-10 12:36:10.444512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:27:48.526 [2024-12-10 12:36:10.452233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee8d30
00:27:48.526 [2024-12-10 12:36:10.452952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.526 [2024-12-10 12:36:10.452972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:27:48.526 [2024-12-10 12:36:10.461925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eeaab8
00:27:48.526 [2024-12-10 12:36:10.462658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.526 [2024-12-10 12:36:10.462679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:48.526 [2024-12-10 12:36:10.471135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eea248
00:27:48.526 [2024-12-10 12:36:10.471864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.526 [2024-12-10 12:36:10.471884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:48.526 [2024-12-10 12:36:10.480306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef6458
00:27:48.526 [2024-12-10 12:36:10.481029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.526 [2024-12-10 12:36:10.481048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:48.526 [2024-12-10 12:36:10.488914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee4578
00:27:48.526 [2024-12-10 12:36:10.489635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.526 [2024-12-10 12:36:10.489654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:27:48.526 [2024-12-10 12:36:10.498585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee12d8
00:27:48.526 [2024-12-10 12:36:10.499539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.526 [2024-12-10 12:36:10.499559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:27:48.526 [2024-12-10 12:36:10.510038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee1f80
00:27:48.526 [2024-12-10 12:36:10.511467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.526 [2024-12-10 12:36:10.511486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:27:48.526 [2024-12-10 12:36:10.516708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef81e0
00:27:48.526 [2024-12-10 12:36:10.517390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.526 [2024-12-10 12:36:10.517409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:27:48.526 [2024-12-10 12:36:10.526407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef6458
00:27:48.526 [2024-12-10 12:36:10.527137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.526 [2024-12-10 12:36:10.527168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:48.526 [2024-12-10 12:36:10.536023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016edfdc0
00:27:48.526 [2024-12-10 12:36:10.536970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.526 [2024-12-10 12:36:10.536991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:27:48.526 [2024-12-10 12:36:10.547442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016efd208
00:27:48.526 [2024-12-10 12:36:10.548884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.526 [2024-12-10 12:36:10.548904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:27:48.526 [2024-12-10 12:36:10.554042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eeb760
00:27:48.526 [2024-12-10 12:36:10.554763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.526 [2024-12-10 12:36:10.554783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:48.526 [2024-12-10 12:36:10.565410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eedd58
00:27:48.526 [2024-12-10 12:36:10.566641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:48.526 [2024-12-10 12:36:10.566660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:46 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:48.526 [2024-12-10 12:36:10.574681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eddc00 00:27:48.526 [2024-12-10 12:36:10.575467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.526 [2024-12-10 12:36:10.575486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:48.526 [2024-12-10 12:36:10.583817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef6cc8 00:27:48.526 [2024-12-10 12:36:10.584845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.526 [2024-12-10 12:36:10.584865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:48.526 [2024-12-10 12:36:10.593482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee23b8 00:27:48.526 [2024-12-10 12:36:10.594835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.526 [2024-12-10 12:36:10.594855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:48.526 [2024-12-10 12:36:10.601833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee84c0 00:27:48.526 [2024-12-10 12:36:10.603190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.526 [2024-12-10 12:36:10.603217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:48.526 [2024-12-10 12:36:10.609710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee7818 00:27:48.526 [2024-12-10 12:36:10.610451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.526 [2024-12-10 12:36:10.610471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:48.526 [2024-12-10 12:36:10.619074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eefae0 00:27:48.526 [2024-12-10 12:36:10.619824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.526 [2024-12-10 12:36:10.619843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:48.526 [2024-12-10 12:36:10.627782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eecc78 00:27:48.526 [2024-12-10 12:36:10.628364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.526 [2024-12-10 12:36:10.628384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.526 [2024-12-10 12:36:10.636871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eeee38 00:27:48.526 [2024-12-10 12:36:10.637429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.526 [2024-12-10 12:36:10.637448] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:48.526 [2024-12-10 12:36:10.646722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef2510 00:27:48.526 [2024-12-10 12:36:10.647264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.526 [2024-12-10 12:36:10.647284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:48.526 [2024-12-10 12:36:10.657074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016efe720 00:27:48.526 [2024-12-10 12:36:10.658118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.526 [2024-12-10 12:36:10.658138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:48.526 [2024-12-10 12:36:10.666775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eefae0 00:27:48.526 [2024-12-10 12:36:10.667953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.526 [2024-12-10 12:36:10.667973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:48.526 [2024-12-10 12:36:10.675877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee8088 00:27:48.526 [2024-12-10 12:36:10.677123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.527 
[2024-12-10 12:36:10.677143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:48.527 [2024-12-10 12:36:10.685340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eeff18 00:27:48.527 [2024-12-10 12:36:10.686590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.527 [2024-12-10 12:36:10.686615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:48.786 [2024-12-10 12:36:10.693447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee2c28 00:27:48.786 [2024-12-10 12:36:10.694832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.786 [2024-12-10 12:36:10.694869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:48.786 [2024-12-10 12:36:10.703241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eef270 00:27:48.786 [2024-12-10 12:36:10.703993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.786 [2024-12-10 12:36:10.704013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:48.786 [2024-12-10 12:36:10.712101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef4f40 00:27:48.786 [2024-12-10 12:36:10.712835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19426 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:48.786 [2024-12-10 12:36:10.712855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:48.786 [2024-12-10 12:36:10.720939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016efc560 00:27:48.786 [2024-12-10 12:36:10.721631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.786 [2024-12-10 12:36:10.721651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:48.786 [2024-12-10 12:36:10.730152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee73e0 00:27:48.786 [2024-12-10 12:36:10.730833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.786 [2024-12-10 12:36:10.730853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.786 [2024-12-10 12:36:10.739238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee73e0 00:27:48.786 [2024-12-10 12:36:10.739921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.786 [2024-12-10 12:36:10.739942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:48.786 [2024-12-10 12:36:10.748373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef7970 00:27:48.786 [2024-12-10 12:36:10.749038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:96 nsid:1 lba:7449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.786 [2024-12-10 12:36:10.749057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:48.786 [2024-12-10 12:36:10.758964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef7970 00:27:48.786 [2024-12-10 12:36:10.760110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.786 [2024-12-10 12:36:10.760130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:48.786 [2024-12-10 12:36:10.766995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee49b0 00:27:48.786 [2024-12-10 12:36:10.767664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.786 [2024-12-10 12:36:10.767684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:48.786 [2024-12-10 12:36:10.776448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016edf550 00:27:48.786 [2024-12-10 12:36:10.777309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.786 [2024-12-10 12:36:10.777330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:48.786 [2024-12-10 12:36:10.786182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef4b08 00:27:48.786 [2024-12-10 12:36:10.787279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.786 [2024-12-10 12:36:10.787299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:48.786 [2024-12-10 12:36:10.795814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee3d08 00:27:48.786 [2024-12-10 12:36:10.797034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.786 [2024-12-10 12:36:10.797053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:48.786 [2024-12-10 12:36:10.803926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eecc78 00:27:48.787 [2024-12-10 12:36:10.804472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.787 [2024-12-10 12:36:10.804492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:48.787 [2024-12-10 12:36:10.813269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef1430 00:27:48.787 [2024-12-10 12:36:10.814110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.787 [2024-12-10 12:36:10.814129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:48.787 [2024-12-10 12:36:10.822878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee6fa8 
00:27:48.787 [2024-12-10 12:36:10.823875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.787 [2024-12-10 12:36:10.823895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:48.787 [2024-12-10 12:36:10.833036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee1710 00:27:48.787 [2024-12-10 12:36:10.834365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.787 [2024-12-10 12:36:10.834385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:48.787 [2024-12-10 12:36:10.840408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef7970 00:27:48.787 [2024-12-10 12:36:10.841299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.787 [2024-12-10 12:36:10.841319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:48.787 [2024-12-10 12:36:10.849207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee3d08 00:27:48.787 [2024-12-10 12:36:10.849942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.787 [2024-12-10 12:36:10.849961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:48.787 [2024-12-10 12:36:10.858148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xdade10) with pdu=0x200016ef0bc0 00:27:48.787 [2024-12-10 12:36:10.858867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.787 [2024-12-10 12:36:10.858886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:48.787 [2024-12-10 12:36:10.867509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef7970 00:27:48.787 [2024-12-10 12:36:10.868255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.787 [2024-12-10 12:36:10.868274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:48.787 [2024-12-10 12:36:10.876450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016eed0b0 00:27:48.787 [2024-12-10 12:36:10.877153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.787 [2024-12-10 12:36:10.877175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:48.787 [2024-12-10 12:36:10.885783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee6fa8 00:27:48.787 [2024-12-10 12:36:10.886515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.787 [2024-12-10 12:36:10.886534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:48.787 [2024-12-10 12:36:10.894786] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee49b0 00:27:48.787 [2024-12-10 12:36:10.895482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.787 [2024-12-10 12:36:10.895501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:48.787 [2024-12-10 12:36:10.904104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef6cc8 00:27:48.787 [2024-12-10 12:36:10.904821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.787 [2024-12-10 12:36:10.904840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:48.787 [2024-12-10 12:36:10.913061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee7818 00:27:48.787 [2024-12-10 12:36:10.913756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.787 [2024-12-10 12:36:10.913775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:48.787 [2024-12-10 12:36:10.922691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ede470 00:27:48.787 [2024-12-10 12:36:10.923523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.787 [2024-12-10 12:36:10.923546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:000b p:0 m:0 dnr:0 
00:27:48.787 [2024-12-10 12:36:10.933831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ef4f40 00:27:48.787 [2024-12-10 12:36:10.935048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.787 [2024-12-10 12:36:10.935067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:48.787 [2024-12-10 12:36:10.942439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee7818 00:27:48.787 [2024-12-10 12:36:10.943539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:48.787 [2024-12-10 12:36:10.943557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:49.046 [2024-12-10 12:36:10.952021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ee4140 00:27:49.046 [2024-12-10 12:36:10.953124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.046 [2024-12-10 12:36:10.953144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:49.046 [2024-12-10 12:36:10.960735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016ede8a8 00:27:49.046 [2024-12-10 12:36:10.962035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.046 [2024-12-10 12:36:10.962055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:49.046 [2024-12-10 12:36:10.969263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdade10) with pdu=0x200016edf988 00:27:49.046 [2024-12-10 12:36:10.970014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:49.046 [2024-12-10 12:36:10.970033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:49.046 27537.50 IOPS, 107.57 MiB/s 00:27:49.046 Latency(us) 00:27:49.046 [2024-12-10T11:36:11.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:49.046 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:49.046 nvme0n1 : 2.01 27547.01 107.61 0.00 0.00 4639.97 1866.35 13392.14 00:27:49.046 [2024-12-10T11:36:11.214Z] =================================================================================================================== 00:27:49.046 [2024-12-10T11:36:11.214Z] Total : 27547.01 107.61 0.00 0.00 4639.97 1866.35 13392.14 00:27:49.046 { 00:27:49.046 "results": [ 00:27:49.046 { 00:27:49.046 "job": "nvme0n1", 00:27:49.046 "core_mask": "0x2", 00:27:49.046 "workload": "randwrite", 00:27:49.046 "status": "finished", 00:27:49.046 "queue_depth": 128, 00:27:49.046 "io_size": 4096, 00:27:49.046 "runtime": 2.006243, 00:27:49.046 "iops": 27547.012002035644, 00:27:49.046 "mibps": 107.60551563295174, 00:27:49.046 "io_failed": 0, 00:27:49.046 "io_timeout": 0, 00:27:49.046 "avg_latency_us": 4639.974352027113, 00:27:49.046 "min_latency_us": 1866.351304347826, 00:27:49.046 "max_latency_us": 13392.139130434784 00:27:49.046 } 00:27:49.046 ], 00:27:49.046 "core_count": 1 00:27:49.046 } 00:27:49.046 12:36:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:49.046 12:36:11 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:49.046 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:49.046 | .driver_specific 00:27:49.046 | .nvme_error 00:27:49.046 | .status_code 00:27:49.046 | .command_transient_transport_error' 00:27:49.046 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:49.046 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 216 > 0 )) 00:27:49.046 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1779383 00:27:49.046 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1779383 ']' 00:27:49.046 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1779383 00:27:49.046 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:49.305 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:49.305 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1779383 00:27:49.305 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:49.305 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:49.305 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1779383' 00:27:49.305 killing process with pid 1779383 00:27:49.305 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@973 -- # kill 1779383 00:27:49.305 Received shutdown signal, test time was about 2.000000 seconds 00:27:49.305 00:27:49.305 Latency(us) 00:27:49.306 [2024-12-10T11:36:11.474Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:49.306 [2024-12-10T11:36:11.474Z] =================================================================================================================== 00:27:49.306 [2024-12-10T11:36:11.474Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:49.306 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1779383 00:27:49.306 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:49.306 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:49.306 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:49.306 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:49.306 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:49.306 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1779861 00:27:49.306 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1779861 /var/tmp/bperf.sock 00:27:49.306 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:49.306 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1779861 ']' 00:27:49.306 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:49.306 12:36:11 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:49.306 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:49.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:49.306 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:49.306 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:49.306 [2024-12-10 12:36:11.468222] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:27:49.306 [2024-12-10 12:36:11.468269] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1779861 ] 00:27:49.306 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:49.306 Zero copy mechanism will not be used. 
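The `get_transient_errcount` check traced above pipes `bdev_get_iostat` through a jq filter and asserts the count is positive. As a standalone sketch of that extraction, the snippet below runs the same filter against a hand-written stand-in payload (the JSON shape follows the field names visible in the log; the numeric value 216 is illustrative, not taken from a real `rpc.py` run):

```shell
# Stand-in for the output of:
#   rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
# Only the fields the digest.sh filter touches are included here.
payload='{"bdevs":[{"name":"nvme0n1","driver_specific":{"nvme_error":{"status_code":{"command_transient_transport_error":216}}}}]}'

# Same filter as host/digest.sh@28, collapsed onto one line.
count=$(echo "$payload" | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

# host/digest.sh@71 then asserts the counter moved: (( count > 0 ))
echo "$count"
```

With the injected crc32c corruption active, every completion carries the TRANSIENT TRANSPORT ERROR status (00/22), so this counter is what proves the data-digest path actually detected the corrupted payloads.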
00:27:49.565 [2024-12-10 12:36:11.543095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.565 [2024-12-10 12:36:11.580011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.565 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:49.565 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:49.565 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:49.565 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:49.824 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:49.824 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.824 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:49.824 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.824 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:49.824 12:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:50.083 nvme0n1 00:27:50.083 12:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:50.083 12:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.083 12:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:50.083 12:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.083 12:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:50.083 12:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:50.083 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:50.083 Zero copy mechanism will not be used. 00:27:50.083 Running I/O for 2 seconds... 00:27:50.343 [2024-12-10 12:36:12.251788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.343 [2024-12-10 12:36:12.251880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.343 [2024-12-10 12:36:12.251911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.343 [2024-12-10 12:36:12.258242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.343 [2024-12-10 12:36:12.258309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.343 [2024-12-10 12:36:12.258333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.343 
[2024-12-10 12:36:12.262845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.343 [2024-12-10 12:36:12.262918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.343 [2024-12-10 12:36:12.262939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.343 [2024-12-10 12:36:12.267452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.343 [2024-12-10 12:36:12.267520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.343 [2024-12-10 12:36:12.267542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.343 [2024-12-10 12:36:12.271990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.343 [2024-12-10 12:36:12.272045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.343 [2024-12-10 12:36:12.272065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.343 [2024-12-10 12:36:12.276760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.343 [2024-12-10 12:36:12.276818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.343 [2024-12-10 12:36:12.276839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.343 [2024-12-10 12:36:12.281648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.343 [2024-12-10 12:36:12.281705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.343 [2024-12-10 12:36:12.281727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.343 [2024-12-10 12:36:12.287116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.343 [2024-12-10 12:36:12.287174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.343 [2024-12-10 12:36:12.287194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.343 [2024-12-10 12:36:12.292978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.343 [2024-12-10 12:36:12.293045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.343 [2024-12-10 12:36:12.293069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.343 [2024-12-10 12:36:12.298370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.343 [2024-12-10 12:36:12.298425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.343 [2024-12-10 12:36:12.298445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.343 [2024-12-10 12:36:12.304326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.343 [2024-12-10 12:36:12.304448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.343 [2024-12-10 12:36:12.304467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.343 [2024-12-10 12:36:12.310319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.343 [2024-12-10 12:36:12.310395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.343 [2024-12-10 12:36:12.310415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.343 [2024-12-10 12:36:12.315504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.343 [2024-12-10 12:36:12.315560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.343 [2024-12-10 12:36:12.315580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.321193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.321249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.321269] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.326495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.326551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.326570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.331954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.332009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.332029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.336946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.337252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.337274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.342288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.342574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 
[2024-12-10 12:36:12.342595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.347214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.347478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.347500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.352371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.352621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.352646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.357564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.357823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.357843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.362340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.362613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.362633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.367422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.367686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.367707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.372761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.373019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.373039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.377497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.377758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.377779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.382235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.382502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.382522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.387070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.387341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.387362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.391944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.392171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.392192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.396990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.397243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.397264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.401581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.401841] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.401862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.406510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.406747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.406768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.412535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.412776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.412796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.417336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.417592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.417612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.421666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with 
pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.421926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.421946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.426925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.427303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.427324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.433551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.433819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.433840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.439755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.440058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.440079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.445804] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.446144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.446170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.451761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.452097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.452118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.457884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.458215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.344 [2024-12-10 12:36:12.458236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.344 [2024-12-10 12:36:12.463873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.344 [2024-12-10 12:36:12.464228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.345 [2024-12-10 12:36:12.464248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.345 [2024-12-10 12:36:12.470174] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.345 [2024-12-10 12:36:12.470514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.345 [2024-12-10 12:36:12.470535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.345 [2024-12-10 12:36:12.476392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.345 [2024-12-10 12:36:12.476741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.345 [2024-12-10 12:36:12.476761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.345 [2024-12-10 12:36:12.482520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.345 [2024-12-10 12:36:12.482854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.345 [2024-12-10 12:36:12.482875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.345 [2024-12-10 12:36:12.488872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.345 [2024-12-10 12:36:12.489216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.345 [2024-12-10 12:36:12.489237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:27:50.345 [2024-12-10 12:36:12.494961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.345 [2024-12-10 12:36:12.495315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.345 [2024-12-10 12:36:12.495340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.345 [2024-12-10 12:36:12.501303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.345 [2024-12-10 12:36:12.501642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.345 [2024-12-10 12:36:12.501663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.345 [2024-12-10 12:36:12.507484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.345 [2024-12-10 12:36:12.507846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.507868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.513655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.514004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.514027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.519826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.520115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.520137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.525766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.526094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.526115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.531925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.532230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.532252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.538361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.538645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.538665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.543795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.544055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.544076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.548240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.548502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.548523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.553073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.553331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.553352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.557859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.558122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.558142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.562395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.562654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.562675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.567186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.567433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.567454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.571819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.572115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.572135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.576778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.577046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 
[2024-12-10 12:36:12.577066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.581300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.581565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.581586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.585911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.586179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.586200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.590522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.590776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.590796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.595372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.595636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.595656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.599854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.600110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.600131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.604391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.604690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.604710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.609070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.609338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.609359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.613677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.613945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.613966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.618286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.618540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.618561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.622957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.623227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.623248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.627572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.627856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.627880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.632054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.632320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.632340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.636573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.636833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.636853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.605 [2024-12-10 12:36:12.642290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.605 [2024-12-10 12:36:12.642623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.605 [2024-12-10 12:36:12.642644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.647712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.648008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.648028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.653238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 
[2024-12-10 12:36:12.653549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.653569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.659593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.659881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.659901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.666062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.666411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.666431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.672726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.672980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.673000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.677990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.678256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.678277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.682718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.682971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.682992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.687093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.687356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.687377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.691439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.691692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.691713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.695642] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.695900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.695920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.699953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.700218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.700238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.704730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.704990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.705010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.709750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.710001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.710021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:27:50.606 [2024-12-10 12:36:12.714769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.715025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.715046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.719536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.719795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.719815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.723890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.724146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.724173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.728466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.728556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.728575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.732692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.732948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.732970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.737242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.737500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.737520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.741492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.741758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.741778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.745700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.745961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.745981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.750164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.750418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.750438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.754792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.755043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.755067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.759842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.760085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.760122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.606 [2024-12-10 12:36:12.764931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.606 [2024-12-10 12:36:12.765192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.606 [2024-12-10 12:36:12.765214] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.867 [2024-12-10 12:36:12.770244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.867 [2024-12-10 12:36:12.770528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-12-10 12:36:12.770549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.867 [2024-12-10 12:36:12.776017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.867 [2024-12-10 12:36:12.776292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-12-10 12:36:12.776313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.867 [2024-12-10 12:36:12.781257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.867 [2024-12-10 12:36:12.781499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-12-10 12:36:12.781520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.867 [2024-12-10 12:36:12.786234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.867 [2024-12-10 12:36:12.786482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 
[2024-12-10 12:36:12.786502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.867 [2024-12-10 12:36:12.790752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.867 [2024-12-10 12:36:12.791005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-12-10 12:36:12.791026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.867 [2024-12-10 12:36:12.795111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.867 [2024-12-10 12:36:12.795365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-12-10 12:36:12.795386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.867 [2024-12-10 12:36:12.799327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.867 [2024-12-10 12:36:12.799584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-12-10 12:36:12.799604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.867 [2024-12-10 12:36:12.803609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.867 [2024-12-10 12:36:12.803868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-12-10 12:36:12.803889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.867 [2024-12-10 12:36:12.807922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.867 [2024-12-10 12:36:12.808173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-12-10 12:36:12.808209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.867 [2024-12-10 12:36:12.812406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.867 [2024-12-10 12:36:12.812668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-12-10 12:36:12.812689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.867 [2024-12-10 12:36:12.816768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.867 [2024-12-10 12:36:12.817030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-12-10 12:36:12.817050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.867 [2024-12-10 12:36:12.820989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.867 [2024-12-10 12:36:12.821257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-12-10 12:36:12.821278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.867 [2024-12-10 12:36:12.825365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.867 [2024-12-10 12:36:12.825621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-12-10 12:36:12.825642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.867 [2024-12-10 12:36:12.829841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.867 [2024-12-10 12:36:12.830104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.867 [2024-12-10 12:36:12.830125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.834810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.835075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.835095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.839852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.840123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.840144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.844266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.844525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.844546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.848557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.848813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.848833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.852940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.853207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.853227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.857186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 
[2024-12-10 12:36:12.857448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.857469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.861318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.861587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.861607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.865497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.865771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.865791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.869636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.869900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.869921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.873723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.873990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.874014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.877832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.878097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.878117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.881890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.882153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.882180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.885966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.886229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.886249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.890003] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.890277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.890297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.894040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.894312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.894332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.898096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.898365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.898386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.902166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.902429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.902449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:27:50.868 [2024-12-10 12:36:12.906238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.906509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.906529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.910317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.910583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.910604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.914359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.914620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.914640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.918419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.918681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.918701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.922488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.922758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.922778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.926603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.926862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.926882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.930644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.930907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.930927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.934702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.934964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.934986] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.938772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.939035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.939055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.942823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.943091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.868 [2024-12-10 12:36:12.943111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.868 [2024-12-10 12:36:12.946886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.868 [2024-12-10 12:36:12.947147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.869 [2024-12-10 12:36:12.947173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.869 [2024-12-10 12:36:12.950942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.869 [2024-12-10 12:36:12.951204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.869 [2024-12-10 12:36:12.951225] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.869 [2024-12-10 12:36:12.955279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.869 [2024-12-10 12:36:12.955550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.869 [2024-12-10 12:36:12.955570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.869 [2024-12-10 12:36:12.959873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.869 [2024-12-10 12:36:12.960127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.869 [2024-12-10 12:36:12.960148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.869 [2024-12-10 12:36:12.964932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.869 [2024-12-10 12:36:12.965193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.869 [2024-12-10 12:36:12.965213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.869 [2024-12-10 12:36:12.969390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.869 [2024-12-10 12:36:12.969657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:50.869 [2024-12-10 12:36:12.969677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.869 [2024-12-10 12:36:12.973788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.869 [2024-12-10 12:36:12.974053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.869 [2024-12-10 12:36:12.974074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.869 [2024-12-10 12:36:12.978196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.869 [2024-12-10 12:36:12.978447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.869 [2024-12-10 12:36:12.978467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.869 [2024-12-10 12:36:12.982598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.869 [2024-12-10 12:36:12.982851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.869 [2024-12-10 12:36:12.982874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.869 [2024-12-10 12:36:12.986726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.869 [2024-12-10 12:36:12.986993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.869 [2024-12-10 12:36:12.987013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.869 [2024-12-10 12:36:12.991071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.869 [2024-12-10 12:36:12.991340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.869 [2024-12-10 12:36:12.991360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.869 [2024-12-10 12:36:12.995461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.869 [2024-12-10 12:36:12.995717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.869 [2024-12-10 12:36:12.995738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.869 [2024-12-10 12:36:13.000719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.869 [2024-12-10 12:36:13.000958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.869 [2024-12-10 12:36:13.000978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.869 [2024-12-10 12:36:13.005925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.869 [2024-12-10 12:36:13.006469] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.869 [2024-12-10 12:36:13.006490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.869 [2024-12-10 12:36:13.010747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.869 [2024-12-10 12:36:13.011025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.869 [2024-12-10 12:36:13.011046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:50.869 [2024-12-10 12:36:13.015204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.869 [2024-12-10 12:36:13.015464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.869 [2024-12-10 12:36:13.015486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:50.869 [2024-12-10 12:36:13.020473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.869 [2024-12-10 12:36:13.020724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.869 [2024-12-10 12:36:13.020745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:50.869 [2024-12-10 12:36:13.025252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.869 [2024-12-10 12:36:13.025524] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.869 [2024-12-10 12:36:13.025548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:50.869 [2024-12-10 12:36:13.029845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:50.869 [2024-12-10 12:36:13.030108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.869 [2024-12-10 12:36:13.030130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.129 [2024-12-10 12:36:13.034378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.129 [2024-12-10 12:36:13.034635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.129 [2024-12-10 12:36:13.034656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.129 [2024-12-10 12:36:13.038708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.129 [2024-12-10 12:36:13.038971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.129 [2024-12-10 12:36:13.039003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.129 [2024-12-10 12:36:13.043070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 
00:27:51.129 [2024-12-10 12:36:13.043331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.129 [2024-12-10 12:36:13.043351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.129 [2024-12-10 12:36:13.047588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.129 [2024-12-10 12:36:13.047845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.129 [2024-12-10 12:36:13.047865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.129 [2024-12-10 12:36:13.051923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.129 [2024-12-10 12:36:13.052201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.129 [2024-12-10 12:36:13.052221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.129 [2024-12-10 12:36:13.056457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.129 [2024-12-10 12:36:13.056712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.129 [2024-12-10 12:36:13.056733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.129 [2024-12-10 12:36:13.060622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.129 [2024-12-10 12:36:13.060878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.129 [2024-12-10 12:36:13.060898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.129 [2024-12-10 12:36:13.064745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.129 [2024-12-10 12:36:13.064995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.129 [2024-12-10 12:36:13.065016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.129 [2024-12-10 12:36:13.068892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.069146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.069172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.073029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.073304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.073325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.077128] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.077401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.077422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.081295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.081549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.081569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.085651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.085914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.085934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.089966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.090235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.090255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:27:51.130 [2024-12-10 12:36:13.094167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.094433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.094455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.098804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.099074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.099094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.103614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.103872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.103893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.108842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.109098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.109119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.113703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.113972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.113992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.118175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.118428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.118448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.122586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.122851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.122871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.127074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.127350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.127370] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.131407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.131660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.131681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.135950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.136207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.136226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.140552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.140817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.140841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.145094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.145414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.145434] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.150512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.150778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.150799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.155871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.156165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.156185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.162399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.162704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.162726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.169438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.169810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:51.130 [2024-12-10 12:36:13.169832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.176372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.176679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.176700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.183306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.183637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.183657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.189867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.190222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.190243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.196718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.197050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.197071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.203528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.203775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.203795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.130 [2024-12-10 12:36:13.211020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.130 [2024-12-10 12:36:13.211385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.130 [2024-12-10 12:36:13.211406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.131 [2024-12-10 12:36:13.217924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.131 [2024-12-10 12:36:13.218291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.131 [2024-12-10 12:36:13.218312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.131 [2024-12-10 12:36:13.224977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.131 [2024-12-10 12:36:13.225354] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.131 [2024-12-10 12:36:13.225375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.131 [2024-12-10 12:36:13.232069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.131 [2024-12-10 12:36:13.232328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.131 [2024-12-10 12:36:13.232347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.131 [2024-12-10 12:36:13.239064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.131 [2024-12-10 12:36:13.239375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.131 [2024-12-10 12:36:13.239396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.131 [2024-12-10 12:36:13.245727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.131 [2024-12-10 12:36:13.246046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.131 [2024-12-10 12:36:13.246067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.131 6206.00 IOPS, 775.75 MiB/s [2024-12-10T11:36:13.299Z] [2024-12-10 12:36:13.253479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with 
pdu=0x200016eff3c8 00:27:51.131 [2024-12-10 12:36:13.253756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.131 [2024-12-10 12:36:13.253776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.131 [2024-12-10 12:36:13.258198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.131 [2024-12-10 12:36:13.258451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.131 [2024-12-10 12:36:13.258471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.131 [2024-12-10 12:36:13.262729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.131 [2024-12-10 12:36:13.262978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.131 [2024-12-10 12:36:13.262999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.131 [2024-12-10 12:36:13.267265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.131 [2024-12-10 12:36:13.267541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.131 [2024-12-10 12:36:13.267562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.131 [2024-12-10 12:36:13.271859] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.131 [2024-12-10 12:36:13.272132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.131 [2024-12-10 12:36:13.272153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.131 [2024-12-10 12:36:13.276366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.131 [2024-12-10 12:36:13.276635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.131 [2024-12-10 12:36:13.276655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.131 [2024-12-10 12:36:13.280838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.131 [2024-12-10 12:36:13.281109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.131 [2024-12-10 12:36:13.281130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.131 [2024-12-10 12:36:13.285367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.131 [2024-12-10 12:36:13.285636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.131 [2024-12-10 12:36:13.285656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.131 [2024-12-10 
12:36:13.289935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.131 [2024-12-10 12:36:13.290207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.131 [2024-12-10 12:36:13.290228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.391 [2024-12-10 12:36:13.294618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.391 [2024-12-10 12:36:13.294898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.391 [2024-12-10 12:36:13.294927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.391 [2024-12-10 12:36:13.299130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.391 [2024-12-10 12:36:13.299405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.391 [2024-12-10 12:36:13.299427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.391 [2024-12-10 12:36:13.303565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.303829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.303850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.308126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.308405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.308426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.313045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.313307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.313329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.317563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.317834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.317854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.322074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.322360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.322382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.326546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.326821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.326841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.331061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.331336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.331357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.335498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.335767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.335788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.339996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.340274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.340294] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.344256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.344511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.344532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.348693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.348962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.348983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.353494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.353763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.353784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.358564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.358825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 
[2024-12-10 12:36:13.358846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.363337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.363591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.363612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.367887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.368140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.368166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.372228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.372497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.372517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.376664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.376935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.376956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.381101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.381370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.381390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.385536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.385795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.385815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.390090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.390359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.390379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.394391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.394662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.394682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.398877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.399155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.399181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.403671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.403937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.403958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.408611] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.408873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.408893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.413840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.414104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.414129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.419095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.419362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.419383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.424003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.424269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.424289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.392 [2024-12-10 12:36:13.428491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.392 [2024-12-10 12:36:13.428750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.392 [2024-12-10 12:36:13.428770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.432862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 
[2024-12-10 12:36:13.433119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.433139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.437106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.437383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.437405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.441536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.441799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.441820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.445944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.446217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.446238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.450347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.450615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.450635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.454894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.455166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.455188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.459230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.459493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.459514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.463884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.464179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.464200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.469304] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.469559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.469580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.474431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.474683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.474703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.479393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.479642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.479663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.484874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.485131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.485152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:27:51.393 [2024-12-10 12:36:13.489716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.489964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.489985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.494090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.494367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.494387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.498458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.498728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.498749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.502672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.502937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.502957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.507103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.507377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.507400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.511568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.511833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.511854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.516028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.516293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.516329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.520470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.520741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.520763] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.524711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.524983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.525004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.529204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.529470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.529491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.533963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.534233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.534257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.538959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.539210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.539230] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.544355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.544635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.544656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.549520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.549787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.549809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.393 [2024-12-10 12:36:13.554511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.393 [2024-12-10 12:36:13.554780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.393 [2024-12-10 12:36:13.554802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.654 [2024-12-10 12:36:13.559116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.654 [2024-12-10 12:36:13.559386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:51.654 [2024-12-10 12:36:13.559407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.654 [2024-12-10 12:36:13.563747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.654 [2024-12-10 12:36:13.564002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.654 [2024-12-10 12:36:13.564024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.654 [2024-12-10 12:36:13.568718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.654 [2024-12-10 12:36:13.568977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.654 [2024-12-10 12:36:13.568998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.654 [2024-12-10 12:36:13.573473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.654 [2024-12-10 12:36:13.573743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.654 [2024-12-10 12:36:13.573764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.654 [2024-12-10 12:36:13.578039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.654 [2024-12-10 12:36:13.578313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.654 [2024-12-10 12:36:13.578334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.654 [2024-12-10 12:36:13.582486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.654 [2024-12-10 12:36:13.582752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.654 [2024-12-10 12:36:13.582773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.654 [2024-12-10 12:36:13.587812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.654 [2024-12-10 12:36:13.588050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.654 [2024-12-10 12:36:13.588071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.654 [2024-12-10 12:36:13.592749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.654 [2024-12-10 12:36:13.593022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.654 [2024-12-10 12:36:13.593043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.654 [2024-12-10 12:36:13.597756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.654 [2024-12-10 12:36:13.597993] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.654 [2024-12-10 12:36:13.598013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.654 [2024-12-10 12:36:13.602567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.654 [2024-12-10 12:36:13.602818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.654 [2024-12-10 12:36:13.602838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.654 [2024-12-10 12:36:13.607599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.654 [2024-12-10 12:36:13.607854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.654 [2024-12-10 12:36:13.607875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.654 [2024-12-10 12:36:13.612208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.654 [2024-12-10 12:36:13.612469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.654 [2024-12-10 12:36:13.612490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.654 [2024-12-10 12:36:13.616679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.654 [2024-12-10 12:36:13.616936] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.654 [2024-12-10 12:36:13.616956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.654 [2024-12-10 12:36:13.621031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.654 [2024-12-10 12:36:13.621323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.654 [2024-12-10 12:36:13.621344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.654 [2024-12-10 12:36:13.625322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.654 [2024-12-10 12:36:13.625576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.654 [2024-12-10 12:36:13.625597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.654 [2024-12-10 12:36:13.629825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.654 [2024-12-10 12:36:13.630100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.654 [2024-12-10 12:36:13.630121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.654 [2024-12-10 12:36:13.634373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with 
pdu=0x200016eff3c8 00:27:51.654 [2024-12-10 12:36:13.634627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.654 [2024-12-10 12:36:13.634648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.638699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.638966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.638987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.643080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.643351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.643372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.647614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.647881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.647902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.652722] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.652962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.652983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.657816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.658072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.658096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.662748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.663009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.663030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.667306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.667567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.667588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.671955] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.672226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.672247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.676560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.676811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.676832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.681039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.681305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.681326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.685340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.685591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.685612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:27:51.655 [2024-12-10 12:36:13.689705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.689974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.689995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.694132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.694410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.694431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.698698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.698966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.698986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.703856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.704115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.704136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.708854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.709099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.709120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.713797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.714069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.714090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.719468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.719732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.719752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.724475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.724736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.724756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.730189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.730457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.730477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.735149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.735408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.735429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.739588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.739855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.739876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.743832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.744090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.744110] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.748073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.748329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.748350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.752277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.752545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.752566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.756482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.756753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 [2024-12-10 12:36:13.756773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.655 [2024-12-10 12:36:13.761256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.655 [2024-12-10 12:36:13.761509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.655 
[2024-12-10 12:36:13.761530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.656 [2024-12-10 12:36:13.767144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.656 [2024-12-10 12:36:13.767518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.656 [2024-12-10 12:36:13.767539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.656 [2024-12-10 12:36:13.773514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.656 [2024-12-10 12:36:13.773876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.656 [2024-12-10 12:36:13.773897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.656 [2024-12-10 12:36:13.779767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.656 [2024-12-10 12:36:13.780118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.656 [2024-12-10 12:36:13.780139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.656 [2024-12-10 12:36:13.786186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.656 [2024-12-10 12:36:13.786487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.656 [2024-12-10 12:36:13.786511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.656 [2024-12-10 12:36:13.792265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.656 [2024-12-10 12:36:13.792611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.656 [2024-12-10 12:36:13.792632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.656 [2024-12-10 12:36:13.798386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.656 [2024-12-10 12:36:13.798719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.656 [2024-12-10 12:36:13.798740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.656 [2024-12-10 12:36:13.804746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.656 [2024-12-10 12:36:13.805104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.656 [2024-12-10 12:36:13.805125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.656 [2024-12-10 12:36:13.811520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.656 [2024-12-10 12:36:13.811834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.656 [2024-12-10 12:36:13.811854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:51.656 [2024-12-10 12:36:13.817718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.656 [2024-12-10 12:36:13.818077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.656 [2024-12-10 12:36:13.818098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:51.916 [2024-12-10 12:36:13.823837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.916 [2024-12-10 12:36:13.824223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.916 [2024-12-10 12:36:13.824245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:51.916 [2024-12-10 12:36:13.830202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.916 [2024-12-10 12:36:13.830462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:51.916 [2024-12-10 12:36:13.830484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:51.916 [2024-12-10 12:36:13.836366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:51.916 [2024-12-10 12:36:13.836699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.916 [2024-12-10 12:36:13.836720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.916 [2024-12-10 12:36:13.842820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.916 [2024-12-10 12:36:13.843185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.916 [2024-12-10 12:36:13.843210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.916 [2024-12-10 12:36:13.849304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.916 [2024-12-10 12:36:13.849581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.916 [2024-12-10 12:36:13.849602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.855281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.855563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.855584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.860097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.860359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.860380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.865010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.865268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.865289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.869891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.870152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.870179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.874818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.875109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.875130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.879930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.880183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.880204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.884609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.884857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.884878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.889554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.889828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.889849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.895588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.895955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.895976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.901117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.901390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.901411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.906249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.906497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.906518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.911204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.911472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.911492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.915974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.916265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.916285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.921646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.921941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.921961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.927544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.927819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.927839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.933304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.933650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.933671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.940590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.940887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.940908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.945911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.946173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.946194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.950466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.950724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.950745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.955000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.955267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.955288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.959520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.959783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.959803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.963868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.964123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.964145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.968374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.968643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.968664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.972887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.973143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.973169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.977256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.977509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.977533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.981707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.981974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.981994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.986502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.986750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.917 [2024-12-10 12:36:13.986771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.917 [2024-12-10 12:36:13.991508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.917 [2024-12-10 12:36:13.991762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.918 [2024-12-10 12:36:13.991783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.918 [2024-12-10 12:36:13.996618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.918 [2024-12-10 12:36:13.996880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.918 [2024-12-10 12:36:13.996901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.918 [2024-12-10 12:36:14.001682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.918 [2024-12-10 12:36:14.001940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.918 [2024-12-10 12:36:14.001961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.918 [2024-12-10 12:36:14.006945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.918 [2024-12-10 12:36:14.007190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.918 [2024-12-10 12:36:14.007210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.918 [2024-12-10 12:36:14.011890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.918 [2024-12-10 12:36:14.012142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.918 [2024-12-10 12:36:14.012168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.918 [2024-12-10 12:36:14.017024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.918 [2024-12-10 12:36:14.017281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.918 [2024-12-10 12:36:14.017302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.918 [2024-12-10 12:36:14.021729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.918 [2024-12-10 12:36:14.021989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.918 [2024-12-10 12:36:14.022011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.918 [2024-12-10 12:36:14.026413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.918 [2024-12-10 12:36:14.026672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.918 [2024-12-10 12:36:14.026692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.918 [2024-12-10 12:36:14.031834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.918 [2024-12-10 12:36:14.032101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.918 [2024-12-10 12:36:14.032121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.918 [2024-12-10 12:36:14.036871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.918 [2024-12-10 12:36:14.037141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.918 [2024-12-10 12:36:14.037167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.918 [2024-12-10 12:36:14.041634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.918 [2024-12-10 12:36:14.041891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.918 [2024-12-10 12:36:14.041911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.918 [2024-12-10 12:36:14.046218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.918 [2024-12-10 12:36:14.046473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.918 [2024-12-10 12:36:14.046494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.918 [2024-12-10 12:36:14.050646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.918 [2024-12-10 12:36:14.050898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.918 [2024-12-10 12:36:14.050918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.918 [2024-12-10 12:36:14.055051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.918 [2024-12-10 12:36:14.055315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.918 [2024-12-10 12:36:14.055336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.918 [2024-12-10 12:36:14.059599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.918 [2024-12-10 12:36:14.059866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.918 [2024-12-10 12:36:14.059886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:51.918 [2024-12-10 12:36:14.064050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.918 [2024-12-10 12:36:14.064336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.918 [2024-12-10 12:36:14.064357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:51.918 [2024-12-10 12:36:14.068618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.918 [2024-12-10 12:36:14.068877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.918 [2024-12-10 12:36:14.068897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:51.918 [2024-12-10 12:36:14.072991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.918 [2024-12-10 12:36:14.073261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.918 [2024-12-10 12:36:14.073282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:51.918 [2024-12-10 12:36:14.077483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:51.918 [2024-12-10 12:36:14.077751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:51.918 [2024-12-10 12:36:14.077772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:52.178 [2024-12-10 12:36:14.082122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.178 [2024-12-10 12:36:14.082394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.178 [2024-12-10 12:36:14.082415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:52.178 [2024-12-10 12:36:14.086454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.178 [2024-12-10 12:36:14.086738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.178 [2024-12-10 12:36:14.086760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:52.178 [2024-12-10 12:36:14.090830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.178 [2024-12-10 12:36:14.091081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.178 [2024-12-10 12:36:14.091102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:52.178 [2024-12-10 12:36:14.095204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.178 [2024-12-10 12:36:14.095452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.178 [2024-12-10 12:36:14.095472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:52.178 [2024-12-10 12:36:14.099578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.178 [2024-12-10 12:36:14.099834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.178 [2024-12-10 12:36:14.099858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:52.178 [2024-12-10 12:36:14.103912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.178 [2024-12-10 12:36:14.104174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.178 [2024-12-10 12:36:14.104195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.108273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.108533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.108554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.112575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.112844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.112865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.116823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.117086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.117107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.121080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.121336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.121357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.125356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.125612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.125632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.129676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.129944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.129964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.133977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.134247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.134268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.138216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.138471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.138492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.142421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.142684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.142704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.146678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.146945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.146966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.150913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.151181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.151202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.155150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.155417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.155438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.159399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.159667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.159688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.163636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.163901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.163922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.167873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.168135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.168156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.172232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.172503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.172524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.176792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.177040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.177060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.181453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.181714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.181735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.186724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.186969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.186989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.192130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.192400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.192421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.196517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.196781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.196801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.200999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.201260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.201282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.205571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.205833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.205854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.209949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.210219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.210240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.214230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.214481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.214505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.218687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.218938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.218958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.223664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.223912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:52.179 [2024-12-10 12:36:14.223933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:52.179 [2024-12-10 12:36:14.228501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8
00:27:52.179 [2024-12-10 12:36:14.228754]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.179 [2024-12-10 12:36:14.228774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.180 [2024-12-10 12:36:14.233219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:52.180 [2024-12-10 12:36:14.233466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.180 [2024-12-10 12:36:14.233487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.180 [2024-12-10 12:36:14.237621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:52.180 [2024-12-10 12:36:14.237868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.180 [2024-12-10 12:36:14.237889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:52.180 [2024-12-10 12:36:14.242055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:52.180 [2024-12-10 12:36:14.242307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.180 [2024-12-10 12:36:14.242328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:52.180 [2024-12-10 12:36:14.246552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 
00:27:52.180 [2024-12-10 12:36:14.246799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.180 [2024-12-10 12:36:14.246819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:52.180 6338.00 IOPS, 792.25 MiB/s [2024-12-10T11:36:14.348Z] [2024-12-10 12:36:14.251925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xdae150) with pdu=0x200016eff3c8 00:27:52.180 [2024-12-10 12:36:14.252045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:52.180 [2024-12-10 12:36:14.252063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:52.180 00:27:52.180 Latency(us) 00:27:52.180 [2024-12-10T11:36:14.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:52.180 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:52.180 nvme0n1 : 2.00 6336.33 792.04 0.00 0.00 2520.95 1823.61 9972.87 00:27:52.180 [2024-12-10T11:36:14.348Z] =================================================================================================================== 00:27:52.180 [2024-12-10T11:36:14.348Z] Total : 6336.33 792.04 0.00 0.00 2520.95 1823.61 9972.87 00:27:52.180 { 00:27:52.180 "results": [ 00:27:52.180 { 00:27:52.180 "job": "nvme0n1", 00:27:52.180 "core_mask": "0x2", 00:27:52.180 "workload": "randwrite", 00:27:52.180 "status": "finished", 00:27:52.180 "queue_depth": 16, 00:27:52.180 "io_size": 131072, 00:27:52.180 "runtime": 2.003526, 00:27:52.180 "iops": 6336.329051881533, 00:27:52.180 "mibps": 792.0411314851916, 00:27:52.180 "io_failed": 0, 00:27:52.180 "io_timeout": 0, 00:27:52.180 "avg_latency_us": 2520.947052485573, 00:27:52.180 
"min_latency_us": 1823.6104347826088, 00:27:52.180 "max_latency_us": 9972.869565217392 00:27:52.180 } 00:27:52.180 ], 00:27:52.180 "core_count": 1 00:27:52.180 } 00:27:52.180 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:52.180 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:52.180 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:52.180 | .driver_specific 00:27:52.180 | .nvme_error 00:27:52.180 | .status_code 00:27:52.180 | .command_transient_transport_error' 00:27:52.180 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:52.439 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 410 > 0 )) 00:27:52.439 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1779861 00:27:52.439 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1779861 ']' 00:27:52.439 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1779861 00:27:52.439 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:52.439 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:52.439 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1779861 00:27:52.439 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:52.439 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:27:52.439 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1779861' 00:27:52.439 killing process with pid 1779861 00:27:52.439 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1779861 00:27:52.439 Received shutdown signal, test time was about 2.000000 seconds 00:27:52.439 00:27:52.439 Latency(us) 00:27:52.439 [2024-12-10T11:36:14.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:52.439 [2024-12-10T11:36:14.607Z] =================================================================================================================== 00:27:52.439 [2024-12-10T11:36:14.607Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:52.439 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1779861 00:27:52.698 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1777730 00:27:52.698 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1777730 ']' 00:27:52.698 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1777730 00:27:52.698 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:52.698 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:52.698 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1777730 00:27:52.698 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:52.698 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:52.698 12:36:14 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1777730' 00:27:52.698 killing process with pid 1777730 00:27:52.698 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1777730 00:27:52.698 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1777730 00:27:52.958 00:27:52.958 real 0m14.018s 00:27:52.958 user 0m26.921s 00:27:52.958 sys 0m4.496s 00:27:52.958 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:52.958 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:52.958 ************************************ 00:27:52.958 END TEST nvmf_digest_error 00:27:52.958 ************************************ 00:27:52.958 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:52.958 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:52.958 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:52.958 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:52.958 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:52.958 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:52.958 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:52.958 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:52.958 rmmod nvme_tcp 00:27:52.958 rmmod nvme_fabrics 00:27:52.958 rmmod nvme_keyring 00:27:52.958 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:52.958 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:52.958 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@129 -- # return 0 00:27:52.958 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1777730 ']' 00:27:52.958 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1777730 00:27:52.958 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1777730 ']' 00:27:52.958 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1777730 00:27:52.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (1777730) - No such process 00:27:52.958 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1777730 is not found' 00:27:52.958 Process with pid 1777730 is not found 00:27:52.958 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:52.958 12:36:14 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:52.958 12:36:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:52.958 12:36:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:52.958 12:36:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:27:52.958 12:36:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:52.958 12:36:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:52.958 12:36:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:52.958 12:36:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:52.958 12:36:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.958 12:36:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.958 12:36:15 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:55.494 00:27:55.494 real 0m36.736s 00:27:55.494 user 0m56.331s 00:27:55.494 sys 0m13.649s 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:55.494 ************************************ 00:27:55.494 END TEST nvmf_digest 00:27:55.494 ************************************ 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.494 ************************************ 00:27:55.494 START TEST nvmf_bdevperf 00:27:55.494 ************************************ 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:55.494 * Looking for test storage... 
00:27:55.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # 
(( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:55.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.494 --rc genhtml_branch_coverage=1 00:27:55.494 --rc genhtml_function_coverage=1 00:27:55.494 --rc genhtml_legend=1 00:27:55.494 --rc geninfo_all_blocks=1 00:27:55.494 --rc geninfo_unexecuted_blocks=1 00:27:55.494 00:27:55.494 ' 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 
-- # LCOV_OPTS=' 00:27:55.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.494 --rc genhtml_branch_coverage=1 00:27:55.494 --rc genhtml_function_coverage=1 00:27:55.494 --rc genhtml_legend=1 00:27:55.494 --rc geninfo_all_blocks=1 00:27:55.494 --rc geninfo_unexecuted_blocks=1 00:27:55.494 00:27:55.494 ' 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:55.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.494 --rc genhtml_branch_coverage=1 00:27:55.494 --rc genhtml_function_coverage=1 00:27:55.494 --rc genhtml_legend=1 00:27:55.494 --rc geninfo_all_blocks=1 00:27:55.494 --rc geninfo_unexecuted_blocks=1 00:27:55.494 00:27:55.494 ' 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:55.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:55.494 --rc genhtml_branch_coverage=1 00:27:55.494 --rc genhtml_function_coverage=1 00:27:55.494 --rc genhtml_legend=1 00:27:55.494 --rc geninfo_all_blocks=1 00:27:55.494 --rc geninfo_unexecuted_blocks=1 00:27:55.494 00:27:55.494 ' 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 
-- # NVMF_IP_LEAST_ADDR=8 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.494 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:55.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:55.495 12:36:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.066 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:02.067 12:36:23 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:02.067 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:02.067 
12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:02.067 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:02.067 Found net devices under 0000:86:00.0: cvl_0_0 00:28:02.067 12:36:23 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:02.067 Found net devices under 0000:86:00.1: cvl_0_1 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:02.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:02.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:28:02.067 00:28:02.067 --- 10.0.0.2 ping statistics --- 00:28:02.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.067 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:02.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:02.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:28:02.067 00:28:02.067 --- 10.0.0.1 ping statistics --- 00:28:02.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:02.067 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:02.067 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1783868 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1783868 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1783868 ']' 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.068 [2024-12-10 12:36:23.366871] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:28:02.068 [2024-12-10 12:36:23.366915] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:02.068 [2024-12-10 12:36:23.447024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:02.068 [2024-12-10 12:36:23.488536] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:02.068 [2024-12-10 12:36:23.488572] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:02.068 [2024-12-10 12:36:23.488580] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:02.068 [2024-12-10 12:36:23.488587] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:02.068 [2024-12-10 12:36:23.488592] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
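The namespace plumbing earlier in the log (nvmf/common.sh moves one NIC port into cvl_0_0_ns_spdk so the target and initiator sides talk over a real link on one machine) can be sketched as a dry run. All interface, namespace, and address names mirror the log; the script only prints the commands instead of executing them, since the real sequence needs root and two NIC ports:

```shell
# Dry-run sketch of the target-namespace setup seen above. Names mirror the
# log (cvl_0_0 / cvl_0_1 / cvl_0_0_ns_spdk); commands are printed, not run,
# because they require root and dedicated interfaces.
target_if=cvl_0_0
initiator_if=cvl_0_1
netns=cvl_0_0_ns_spdk

plan=$(cat <<EOF
ip netns add $netns
ip link set $target_if netns $netns
ip addr add 10.0.0.1/24 dev $initiator_if
ip netns exec $netns ip addr add 10.0.0.2/24 dev $target_if
ip link set $initiator_if up
ip netns exec $netns ip link set $target_if up
ip netns exec $netns ip link set lo up
iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $netns ping -c 1 10.0.0.1
EOF
)
printf '%s\n' "$plan"
```

The two pings at the end correspond to the connectivity check the log performs before starting the target inside the namespace.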
00:28:02.068 [2024-12-10 12:36:23.490027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:02.068 [2024-12-10 12:36:23.490134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.068 [2024-12-10 12:36:23.490134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.068 [2024-12-10 12:36:23.626392] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.068 Malloc0 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:02.068 [2024-12-10 12:36:23.687769] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:02.068 
12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:02.068 { 00:28:02.068 "params": { 00:28:02.068 "name": "Nvme$subsystem", 00:28:02.068 "trtype": "$TEST_TRANSPORT", 00:28:02.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.068 "adrfam": "ipv4", 00:28:02.068 "trsvcid": "$NVMF_PORT", 00:28:02.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.068 "hdgst": ${hdgst:-false}, 00:28:02.068 "ddgst": ${ddgst:-false} 00:28:02.068 }, 00:28:02.068 "method": "bdev_nvme_attach_controller" 00:28:02.068 } 00:28:02.068 EOF 00:28:02.068 )") 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:02.068 12:36:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:02.068 "params": { 00:28:02.068 "name": "Nvme1", 00:28:02.068 "trtype": "tcp", 00:28:02.068 "traddr": "10.0.0.2", 00:28:02.068 "adrfam": "ipv4", 00:28:02.068 "trsvcid": "4420", 00:28:02.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:02.068 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:02.068 "hdgst": false, 00:28:02.068 "ddgst": false 00:28:02.068 }, 00:28:02.068 "method": "bdev_nvme_attach_controller" 00:28:02.068 }' 00:28:02.068 [2024-12-10 12:36:23.741373] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
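The tgt_init sequence above reduces to five RPCs against the freshly started nvmf_tgt. A minimal sketch, collecting them in order; `scripts/rpc.py` is SPDK's stock RPC client and its location inside a checkout is an assumption here (the log issues the same RPCs through the rpc_cmd wrapper). The commands are printed rather than executed, since they need a running target:

```shell
# The tgt_init RPC calls from the log, in order. rpc=scripts/rpc.py is an
# assumed path within an SPDK checkout; printed only, since executing them
# requires a live nvmf_tgt listening on /var/tmp/spdk.sock.
rpc=scripts/rpc.py
steps=(
  "$rpc nvmf_create_transport -t tcp -o -u 8192"
  "$rpc bdev_malloc_create 64 512 -b Malloc0"
  "$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
  "$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0"
  "$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
)
printf '%s\n' "${steps[@]}"
```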
00:28:02.068 [2024-12-10 12:36:23.741426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1784061 ] 00:28:02.068 [2024-12-10 12:36:23.819063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.068 [2024-12-10 12:36:23.859620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.068 Running I/O for 1 seconds... 00:28:03.004 10958.00 IOPS, 42.80 MiB/s 00:28:03.004 Latency(us) 00:28:03.004 [2024-12-10T11:36:25.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.004 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:03.004 Verification LBA range: start 0x0 length 0x4000 00:28:03.004 Nvme1n1 : 1.01 10965.59 42.83 0.00 0.00 11631.07 2065.81 11112.63 00:28:03.004 [2024-12-10T11:36:25.172Z] =================================================================================================================== 00:28:03.004 [2024-12-10T11:36:25.172Z] Total : 10965.59 42.83 0.00 0.00 11631.07 2065.81 11112.63 00:28:03.262 12:36:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1784343 00:28:03.262 12:36:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:03.262 12:36:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:03.262 12:36:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:03.262 12:36:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:03.262 12:36:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:03.262 12:36:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # 
for subsystem in "${@:-1}" 00:28:03.262 12:36:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:03.262 { 00:28:03.262 "params": { 00:28:03.262 "name": "Nvme$subsystem", 00:28:03.262 "trtype": "$TEST_TRANSPORT", 00:28:03.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:03.262 "adrfam": "ipv4", 00:28:03.262 "trsvcid": "$NVMF_PORT", 00:28:03.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:03.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:03.262 "hdgst": ${hdgst:-false}, 00:28:03.262 "ddgst": ${ddgst:-false} 00:28:03.262 }, 00:28:03.262 "method": "bdev_nvme_attach_controller" 00:28:03.262 } 00:28:03.262 EOF 00:28:03.262 )") 00:28:03.262 12:36:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:03.262 12:36:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:03.262 12:36:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:03.262 12:36:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:03.262 "params": { 00:28:03.262 "name": "Nvme1", 00:28:03.262 "trtype": "tcp", 00:28:03.262 "traddr": "10.0.0.2", 00:28:03.262 "adrfam": "ipv4", 00:28:03.262 "trsvcid": "4420", 00:28:03.262 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:03.262 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:03.262 "hdgst": false, 00:28:03.262 "ddgst": false 00:28:03.262 }, 00:28:03.262 "method": "bdev_nvme_attach_controller" 00:28:03.262 }' 00:28:03.262 [2024-12-10 12:36:25.319549] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
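The gen_nvmf_target_json heredoc above builds the bdevperf config that is fed in over /dev/fd. A minimal standalone sketch of the same idea; `gen_config` is a hypothetical stand-in for the real helper in nvmf/common.sh, with field values copied from the log:

```shell
# Sketch of the config generation above: emit the bdev_nvme_attach_controller
# JSON fragment bdevperf consumes. gen_config is a hypothetical stand-in for
# gen_nvmf_target_json; values mirror the log.
gen_config() {
  local traddr=$1 trsvcid=$2
  cat <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
config=$(gen_config 10.0.0.2 4420)
printf '%s\n' "$config"
```

In the log this fragment is wrapped by jq into the full bdev config and handed to bdevperf via `--json /dev/fd/63`.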
00:28:03.262 [2024-12-10 12:36:25.319600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1784343 ] 00:28:03.262 [2024-12-10 12:36:25.396356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.520 [2024-12-10 12:36:25.436081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.778 Running I/O for 15 seconds... 00:28:05.649 11098.00 IOPS, 43.35 MiB/s [2024-12-10T11:36:28.386Z] 11149.00 IOPS, 43.55 MiB/s [2024-12-10T11:36:28.386Z] 12:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1783868 00:28:06.218 12:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:06.218 [2024-12-10 12:36:28.285227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.218 [2024-12-10 12:36:28.285263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.218 [2024-12-10 12:36:28.285282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.218 [2024-12-10 12:36:28.285291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.218 [2024-12-10 12:36:28.285300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.218 [2024-12-10 12:36:28.285308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.218 [2024-12-10 12:36:28.285322] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.218 [2024-12-10 12:36:28.285331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.218 [2024-12-10 12:36:28.285339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.218 [2024-12-10 12:36:28.285347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.218 [2024-12-10 12:36:28.285356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.218 [2024-12-10 12:36:28.285364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.218 [2024-12-10 12:36:28.285373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.218 [2024-12-10 12:36:28.285381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.218 [2024-12-10 12:36:28.285389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.218 [2024-12-10 12:36:28.285399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.218 [2024-12-10 12:36:28.285408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.218 [2024-12-10 12:36:28.285415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:06.218 [2024-12-10 12:36:28.285424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.218 [2024-12-10 12:36:28.285431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.218 [2024-12-10 12:36:28.285440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.218 [2024-12-10 12:36:28.285448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.218 [2024-12-10 12:36:28.285457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.218 [2024-12-10 12:36:28.285465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.218 [2024-12-10 12:36:28.285473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.218 [2024-12-10 12:36:28.285480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.219 [2024-12-10 12:36:28.285488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.219 [2024-12-10 12:36:28.285495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.219 [2024-12-10 12:36:28.285503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:06.219 [2024-12-10 12:36:28.285510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.219 [2024-12-10 12:36:28.285519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:06.219 [2024-12-10 12:36:28.285528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.219 [... identical WRITE / ABORTED - SQ DELETION pairs repeated for lba 97240 through lba 97744 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) ...]
00:28:06.220 [2024-12-10 12:36:28.286549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:06.220 [2024-12-10 12:36:28.286556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.221 [... identical READ / ABORTED - SQ DELETION pairs repeated for lba 96744 through lba 97104 (len:8, SGL TRANSPORT DATA BLOCK) ...]
00:28:06.221 [2024-12-10 12:36:28.287249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc510 is same with the state(6) to be set
00:28:06.221 [2024-12-10 12:36:28.287258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:06.221 [2024-12-10 12:36:28.287263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:06.221 [2024-12-10 12:36:28.287269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97752 len:8 PRP1 0x0 PRP2 0x0
00:28:06.221 [2024-12-10 12:36:28.287277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:06.221 [2024-12-10 12:36:28.290285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:06.221 [2024-12-10 12:36:28.290343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:06.221 [2024-12-10 12:36:28.290864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:06.221 [2024-12-10 12:36:28.290880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:06.221 [2024-12-10 12:36:28.290889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:06.221 [2024-12-10 12:36:28.291069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:06.221 [2024-12-10 12:36:28.291255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:06.221 [2024-12-10 12:36:28.291264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:06.221 [2024-12-10 12:36:28.291273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:06.221 [2024-12-10 12:36:28.291281] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:06.221 [2024-12-10 12:36:28.303637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.221 [2024-12-10 12:36:28.304068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.221 [2024-12-10 12:36:28.304087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.222 [2024-12-10 12:36:28.304095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.222 [2024-12-10 12:36:28.304276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.222 [2024-12-10 12:36:28.304451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.222 [2024-12-10 12:36:28.304459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.222 [2024-12-10 12:36:28.304467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.222 [2024-12-10 12:36:28.304473] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.222 [2024-12-10 12:36:28.316496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.222 [2024-12-10 12:36:28.316939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.222 [2024-12-10 12:36:28.316957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.222 [2024-12-10 12:36:28.316965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.222 [2024-12-10 12:36:28.317138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.222 [2024-12-10 12:36:28.317327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.222 [2024-12-10 12:36:28.317336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.222 [2024-12-10 12:36:28.317343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.222 [2024-12-10 12:36:28.317349] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.222 [2024-12-10 12:36:28.329411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.222 [2024-12-10 12:36:28.329843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.222 [2024-12-10 12:36:28.329859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.222 [2024-12-10 12:36:28.329866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.222 [2024-12-10 12:36:28.330039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.222 [2024-12-10 12:36:28.330224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.222 [2024-12-10 12:36:28.330233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.222 [2024-12-10 12:36:28.330239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.222 [2024-12-10 12:36:28.330245] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.222 [2024-12-10 12:36:28.342328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.222 [2024-12-10 12:36:28.342727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.222 [2024-12-10 12:36:28.342743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.222 [2024-12-10 12:36:28.342750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.222 [2024-12-10 12:36:28.342913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.222 [2024-12-10 12:36:28.343076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.222 [2024-12-10 12:36:28.343084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.222 [2024-12-10 12:36:28.343090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.222 [2024-12-10 12:36:28.343095] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.222 [2024-12-10 12:36:28.355202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.222 [2024-12-10 12:36:28.355618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.222 [2024-12-10 12:36:28.355634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.222 [2024-12-10 12:36:28.355641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.222 [2024-12-10 12:36:28.355813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.222 [2024-12-10 12:36:28.355985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.222 [2024-12-10 12:36:28.355993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.222 [2024-12-10 12:36:28.355999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.222 [2024-12-10 12:36:28.356005] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.222 [2024-12-10 12:36:28.368088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.222 [2024-12-10 12:36:28.368512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.222 [2024-12-10 12:36:28.368528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.222 [2024-12-10 12:36:28.368536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.222 [2024-12-10 12:36:28.368712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.222 [2024-12-10 12:36:28.368885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.222 [2024-12-10 12:36:28.368893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.222 [2024-12-10 12:36:28.368899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.222 [2024-12-10 12:36:28.368905] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.222 [2024-12-10 12:36:28.381229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.222 [2024-12-10 12:36:28.381676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.222 [2024-12-10 12:36:28.381693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.222 [2024-12-10 12:36:28.381700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.481 [2024-12-10 12:36:28.381879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.481 [2024-12-10 12:36:28.382057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.481 [2024-12-10 12:36:28.382066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.481 [2024-12-10 12:36:28.382073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.481 [2024-12-10 12:36:28.382078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.481 [2024-12-10 12:36:28.394095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.481 [2024-12-10 12:36:28.394521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-12-10 12:36:28.394559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.481 [2024-12-10 12:36:28.394584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.481 [2024-12-10 12:36:28.395182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.481 [2024-12-10 12:36:28.395638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.481 [2024-12-10 12:36:28.395655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.481 [2024-12-10 12:36:28.395669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.481 [2024-12-10 12:36:28.395682] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.481 [2024-12-10 12:36:28.409292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.481 [2024-12-10 12:36:28.409755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-12-10 12:36:28.409777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.481 [2024-12-10 12:36:28.409787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.481 [2024-12-10 12:36:28.410042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.481 [2024-12-10 12:36:28.410305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.481 [2024-12-10 12:36:28.410321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.481 [2024-12-10 12:36:28.410330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.481 [2024-12-10 12:36:28.410340] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.481 [2024-12-10 12:36:28.422293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.481 [2024-12-10 12:36:28.422746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-12-10 12:36:28.422790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.481 [2024-12-10 12:36:28.422814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.481 [2024-12-10 12:36:28.423424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.481 [2024-12-10 12:36:28.423997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.481 [2024-12-10 12:36:28.424005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.481 [2024-12-10 12:36:28.424012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.481 [2024-12-10 12:36:28.424018] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.481 [2024-12-10 12:36:28.435234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.481 [2024-12-10 12:36:28.435573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-12-10 12:36:28.435590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.481 [2024-12-10 12:36:28.435597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.481 [2024-12-10 12:36:28.435771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.481 [2024-12-10 12:36:28.435943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.481 [2024-12-10 12:36:28.435952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.481 [2024-12-10 12:36:28.435958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.481 [2024-12-10 12:36:28.435964] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.481 [2024-12-10 12:36:28.448174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.481 [2024-12-10 12:36:28.448515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.481 [2024-12-10 12:36:28.448530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.481 [2024-12-10 12:36:28.448537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.481 [2024-12-10 12:36:28.448700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.481 [2024-12-10 12:36:28.448863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.481 [2024-12-10 12:36:28.448871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.481 [2024-12-10 12:36:28.448877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.482 [2024-12-10 12:36:28.448886] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.482 [2024-12-10 12:36:28.461101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.482 [2024-12-10 12:36:28.461565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-12-10 12:36:28.461613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.482 [2024-12-10 12:36:28.461637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.482 [2024-12-10 12:36:28.462065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.482 [2024-12-10 12:36:28.462245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.482 [2024-12-10 12:36:28.462254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.482 [2024-12-10 12:36:28.462261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.482 [2024-12-10 12:36:28.462267] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.482 [2024-12-10 12:36:28.474007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.482 [2024-12-10 12:36:28.474381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-12-10 12:36:28.474399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.482 [2024-12-10 12:36:28.474406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.482 [2024-12-10 12:36:28.474579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.482 [2024-12-10 12:36:28.474752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.482 [2024-12-10 12:36:28.474760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.482 [2024-12-10 12:36:28.474766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.482 [2024-12-10 12:36:28.474772] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.482 [2024-12-10 12:36:28.486898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.482 [2024-12-10 12:36:28.487332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-12-10 12:36:28.487376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.482 [2024-12-10 12:36:28.487399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.482 [2024-12-10 12:36:28.487936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.482 [2024-12-10 12:36:28.488109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.482 [2024-12-10 12:36:28.488117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.482 [2024-12-10 12:36:28.488124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.482 [2024-12-10 12:36:28.488130] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.482 [2024-12-10 12:36:28.499772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.482 [2024-12-10 12:36:28.500192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-12-10 12:36:28.500208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.482 [2024-12-10 12:36:28.500215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.482 [2024-12-10 12:36:28.500378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.482 [2024-12-10 12:36:28.500542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.482 [2024-12-10 12:36:28.500549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.482 [2024-12-10 12:36:28.500555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.482 [2024-12-10 12:36:28.500561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.482 [2024-12-10 12:36:28.512668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.482 [2024-12-10 12:36:28.513093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-12-10 12:36:28.513109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.482 [2024-12-10 12:36:28.513117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.482 [2024-12-10 12:36:28.513297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.482 [2024-12-10 12:36:28.513471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.482 [2024-12-10 12:36:28.513479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.482 [2024-12-10 12:36:28.513485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.482 [2024-12-10 12:36:28.513491] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.482 [2024-12-10 12:36:28.525623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.482 [2024-12-10 12:36:28.526031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-12-10 12:36:28.526047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.482 [2024-12-10 12:36:28.526054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.482 [2024-12-10 12:36:28.526233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.482 [2024-12-10 12:36:28.526407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.482 [2024-12-10 12:36:28.526415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.482 [2024-12-10 12:36:28.526421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.482 [2024-12-10 12:36:28.526427] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.482 [2024-12-10 12:36:28.538606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.482 [2024-12-10 12:36:28.539033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-12-10 12:36:28.539050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.482 [2024-12-10 12:36:28.539058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.482 [2024-12-10 12:36:28.539246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.482 [2024-12-10 12:36:28.539425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.482 [2024-12-10 12:36:28.539433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.482 [2024-12-10 12:36:28.539439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.482 [2024-12-10 12:36:28.539446] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.482 [2024-12-10 12:36:28.551702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.482 [2024-12-10 12:36:28.552116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-12-10 12:36:28.552134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.482 [2024-12-10 12:36:28.552141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.482 [2024-12-10 12:36:28.552324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.482 [2024-12-10 12:36:28.552503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.482 [2024-12-10 12:36:28.552512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.482 [2024-12-10 12:36:28.552518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.482 [2024-12-10 12:36:28.552525] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.482 [2024-12-10 12:36:28.564782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.482 [2024-12-10 12:36:28.565201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-12-10 12:36:28.565246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.482 [2024-12-10 12:36:28.565269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.482 [2024-12-10 12:36:28.565852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.482 [2024-12-10 12:36:28.566152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.482 [2024-12-10 12:36:28.566166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.482 [2024-12-10 12:36:28.566173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.482 [2024-12-10 12:36:28.566179] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.482 [2024-12-10 12:36:28.577804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.482 [2024-12-10 12:36:28.578208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-12-10 12:36:28.578225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.482 [2024-12-10 12:36:28.578232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.482 [2024-12-10 12:36:28.578405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.482 [2024-12-10 12:36:28.578577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.482 [2024-12-10 12:36:28.578588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.482 [2024-12-10 12:36:28.578594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.482 [2024-12-10 12:36:28.578600] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.482 [2024-12-10 12:36:28.590675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.482 [2024-12-10 12:36:28.591091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-12-10 12:36:28.591107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.482 [2024-12-10 12:36:28.591114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.482 [2024-12-10 12:36:28.591293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.482 [2024-12-10 12:36:28.591466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.482 [2024-12-10 12:36:28.591473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.482 [2024-12-10 12:36:28.591479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.482 [2024-12-10 12:36:28.591486] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.482 [2024-12-10 12:36:28.603561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.482 [2024-12-10 12:36:28.603952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-12-10 12:36:28.603997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.482 [2024-12-10 12:36:28.604019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.482 [2024-12-10 12:36:28.604467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.482 [2024-12-10 12:36:28.604642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.482 [2024-12-10 12:36:28.604650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.482 [2024-12-10 12:36:28.604656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.482 [2024-12-10 12:36:28.604662] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.482 [2024-12-10 12:36:28.616378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.482 [2024-12-10 12:36:28.616797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-12-10 12:36:28.616814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.482 [2024-12-10 12:36:28.616821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.482 [2024-12-10 12:36:28.616993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.482 [2024-12-10 12:36:28.617172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.482 [2024-12-10 12:36:28.617181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.482 [2024-12-10 12:36:28.617187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.482 [2024-12-10 12:36:28.617196] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.482 [2024-12-10 12:36:28.629200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.482 [2024-12-10 12:36:28.629641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-12-10 12:36:28.629657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.482 [2024-12-10 12:36:28.629663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.482 [2024-12-10 12:36:28.629826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.482 [2024-12-10 12:36:28.629989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.482 [2024-12-10 12:36:28.629997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.482 [2024-12-10 12:36:28.630002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.482 [2024-12-10 12:36:28.630008] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.482 [2024-12-10 12:36:28.642142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.482 [2024-12-10 12:36:28.642568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.482 [2024-12-10 12:36:28.642586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.482 [2024-12-10 12:36:28.642593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.482 [2024-12-10 12:36:28.642770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.482 [2024-12-10 12:36:28.642949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.482 [2024-12-10 12:36:28.642957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.482 [2024-12-10 12:36:28.642963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.482 [2024-12-10 12:36:28.642969] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.742 [2024-12-10 12:36:28.655096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.742 [2024-12-10 12:36:28.655540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-12-10 12:36:28.655584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.742 [2024-12-10 12:36:28.655608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.742 [2024-12-10 12:36:28.656204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.742 [2024-12-10 12:36:28.656659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.742 [2024-12-10 12:36:28.656667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.742 [2024-12-10 12:36:28.656673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.742 [2024-12-10 12:36:28.656679] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.742 [2024-12-10 12:36:28.668025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.742 [2024-12-10 12:36:28.668449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-12-10 12:36:28.668466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.742 [2024-12-10 12:36:28.668473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.742 [2024-12-10 12:36:28.668646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.742 [2024-12-10 12:36:28.668823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.742 [2024-12-10 12:36:28.668831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.742 [2024-12-10 12:36:28.668837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.742 [2024-12-10 12:36:28.668843] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.742 [2024-12-10 12:36:28.680876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.742 [2024-12-10 12:36:28.681301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-12-10 12:36:28.681318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.742 [2024-12-10 12:36:28.681325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.742 [2024-12-10 12:36:28.681498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.742 [2024-12-10 12:36:28.681671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.742 [2024-12-10 12:36:28.681679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.742 [2024-12-10 12:36:28.681685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.742 [2024-12-10 12:36:28.681691] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.742 [2024-12-10 12:36:28.693774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.742 [2024-12-10 12:36:28.694151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-12-10 12:36:28.694172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.742 [2024-12-10 12:36:28.694179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.742 [2024-12-10 12:36:28.694343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.742 [2024-12-10 12:36:28.694505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.742 [2024-12-10 12:36:28.694514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.742 [2024-12-10 12:36:28.694520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.742 [2024-12-10 12:36:28.694525] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.742 [2024-12-10 12:36:28.706813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.742 [2024-12-10 12:36:28.707184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-12-10 12:36:28.707201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.742 [2024-12-10 12:36:28.707208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.742 [2024-12-10 12:36:28.707388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.742 [2024-12-10 12:36:28.707560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.742 [2024-12-10 12:36:28.707568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.742 [2024-12-10 12:36:28.707574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.742 [2024-12-10 12:36:28.707580] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.742 [2024-12-10 12:36:28.719771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.742 [2024-12-10 12:36:28.720214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-12-10 12:36:28.720232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.742 [2024-12-10 12:36:28.720240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.742 [2024-12-10 12:36:28.720412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.742 [2024-12-10 12:36:28.720589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.742 [2024-12-10 12:36:28.720597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.742 [2024-12-10 12:36:28.720603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.742 [2024-12-10 12:36:28.720609] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.742 9492.00 IOPS, 37.08 MiB/s [2024-12-10T11:36:28.910Z] [2024-12-10 12:36:28.734149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.742 [2024-12-10 12:36:28.734515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-12-10 12:36:28.734532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.742 [2024-12-10 12:36:28.734539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.742 [2024-12-10 12:36:28.734712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.742 [2024-12-10 12:36:28.734885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.742 [2024-12-10 12:36:28.734893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.742 [2024-12-10 12:36:28.734899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.742 [2024-12-10 12:36:28.734905] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.742 [2024-12-10 12:36:28.747090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.742 [2024-12-10 12:36:28.747532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-12-10 12:36:28.747550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.742 [2024-12-10 12:36:28.747557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.742 [2024-12-10 12:36:28.747730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.742 [2024-12-10 12:36:28.747907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.742 [2024-12-10 12:36:28.747915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.742 [2024-12-10 12:36:28.747921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.742 [2024-12-10 12:36:28.747927] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.742 [2024-12-10 12:36:28.759972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.742 [2024-12-10 12:36:28.760401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.742 [2024-12-10 12:36:28.760445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.742 [2024-12-10 12:36:28.760468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.742 [2024-12-10 12:36:28.760962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.742 [2024-12-10 12:36:28.761135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.742 [2024-12-10 12:36:28.761144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.743 [2024-12-10 12:36:28.761150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.743 [2024-12-10 12:36:28.761163] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.743 [2024-12-10 12:36:28.772775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.743 [2024-12-10 12:36:28.773169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.743 [2024-12-10 12:36:28.773185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.743 [2024-12-10 12:36:28.773191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.743 [2024-12-10 12:36:28.773354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.743 [2024-12-10 12:36:28.773518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.743 [2024-12-10 12:36:28.773525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.743 [2024-12-10 12:36:28.773531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.743 [2024-12-10 12:36:28.773537] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.743 [2024-12-10 12:36:28.785688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.743 [2024-12-10 12:36:28.786087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.743 [2024-12-10 12:36:28.786104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.743 [2024-12-10 12:36:28.786111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.743 [2024-12-10 12:36:28.786302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.743 [2024-12-10 12:36:28.786475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.743 [2024-12-10 12:36:28.786483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.743 [2024-12-10 12:36:28.786489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.743 [2024-12-10 12:36:28.786498] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.743 [2024-12-10 12:36:28.798578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.743 [2024-12-10 12:36:28.798999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.743 [2024-12-10 12:36:28.799016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.743 [2024-12-10 12:36:28.799023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.743 [2024-12-10 12:36:28.799207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.743 [2024-12-10 12:36:28.799386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.743 [2024-12-10 12:36:28.799395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.743 [2024-12-10 12:36:28.799401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.743 [2024-12-10 12:36:28.799407] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.743 [2024-12-10 12:36:28.811564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.743 [2024-12-10 12:36:28.811965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.743 [2024-12-10 12:36:28.811982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.743 [2024-12-10 12:36:28.811989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.743 [2024-12-10 12:36:28.812168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.743 [2024-12-10 12:36:28.812361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.743 [2024-12-10 12:36:28.812370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.743 [2024-12-10 12:36:28.812376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.743 [2024-12-10 12:36:28.812382] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.743 [2024-12-10 12:36:28.824656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.743 [2024-12-10 12:36:28.825061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.743 [2024-12-10 12:36:28.825077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.743 [2024-12-10 12:36:28.825085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.743 [2024-12-10 12:36:28.825265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.743 [2024-12-10 12:36:28.825439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.743 [2024-12-10 12:36:28.825447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.743 [2024-12-10 12:36:28.825453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.743 [2024-12-10 12:36:28.825460] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.743 [2024-12-10 12:36:28.837558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.743 [2024-12-10 12:36:28.837967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.743 [2024-12-10 12:36:28.837982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.743 [2024-12-10 12:36:28.837989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.743 [2024-12-10 12:36:28.838152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.743 [2024-12-10 12:36:28.838346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.743 [2024-12-10 12:36:28.838355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.743 [2024-12-10 12:36:28.838361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.743 [2024-12-10 12:36:28.838367] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.743 [2024-12-10 12:36:28.850433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.743 [2024-12-10 12:36:28.850857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.743 [2024-12-10 12:36:28.850900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.743 [2024-12-10 12:36:28.850922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.743 [2024-12-10 12:36:28.851522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.743 [2024-12-10 12:36:28.852036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.743 [2024-12-10 12:36:28.852044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.743 [2024-12-10 12:36:28.852050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.743 [2024-12-10 12:36:28.852056] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.743 [2024-12-10 12:36:28.863292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.743 [2024-12-10 12:36:28.863705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.743 [2024-12-10 12:36:28.863721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.743 [2024-12-10 12:36:28.863728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.743 [2024-12-10 12:36:28.863900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.743 [2024-12-10 12:36:28.864074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.743 [2024-12-10 12:36:28.864082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.743 [2024-12-10 12:36:28.864089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.743 [2024-12-10 12:36:28.864094] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.743 [2024-12-10 12:36:28.876188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.743 [2024-12-10 12:36:28.876539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.743 [2024-12-10 12:36:28.876556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.743 [2024-12-10 12:36:28.876566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.743 [2024-12-10 12:36:28.876744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.743 [2024-12-10 12:36:28.876923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.743 [2024-12-10 12:36:28.876931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.743 [2024-12-10 12:36:28.876938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.743 [2024-12-10 12:36:28.876943] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.743 [2024-12-10 12:36:28.888988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.743 [2024-12-10 12:36:28.889379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.743 [2024-12-10 12:36:28.889396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.743 [2024-12-10 12:36:28.889403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.743 [2024-12-10 12:36:28.889575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.743 [2024-12-10 12:36:28.889748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.743 [2024-12-10 12:36:28.889756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.743 [2024-12-10 12:36:28.889763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.743 [2024-12-10 12:36:28.889769] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:06.744 [2024-12-10 12:36:28.901810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:06.744 [2024-12-10 12:36:28.902234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:06.744 [2024-12-10 12:36:28.902251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:06.744 [2024-12-10 12:36:28.902258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:06.744 [2024-12-10 12:36:28.902435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:06.744 [2024-12-10 12:36:28.902615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:06.744 [2024-12-10 12:36:28.902623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:06.744 [2024-12-10 12:36:28.902629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:06.744 [2024-12-10 12:36:28.902636] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.003 [2024-12-10 12:36:28.914788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.003 [2024-12-10 12:36:28.915132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.003 [2024-12-10 12:36:28.915148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.003 [2024-12-10 12:36:28.915155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.003 [2024-12-10 12:36:28.915335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.003 [2024-12-10 12:36:28.915511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.003 [2024-12-10 12:36:28.915519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.003 [2024-12-10 12:36:28.915526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.003 [2024-12-10 12:36:28.915532] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.003 [2024-12-10 12:36:28.927768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.003 [2024-12-10 12:36:28.928184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.003 [2024-12-10 12:36:28.928201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.003 [2024-12-10 12:36:28.928209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.003 [2024-12-10 12:36:28.928390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.003 [2024-12-10 12:36:28.928554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.003 [2024-12-10 12:36:28.928562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.003 [2024-12-10 12:36:28.928567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.003 [2024-12-10 12:36:28.928573] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.003 [2024-12-10 12:36:28.940669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.003 [2024-12-10 12:36:28.941046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.003 [2024-12-10 12:36:28.941063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.003 [2024-12-10 12:36:28.941070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.003 [2024-12-10 12:36:28.941258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.003 [2024-12-10 12:36:28.941431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.003 [2024-12-10 12:36:28.941439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.003 [2024-12-10 12:36:28.941445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.003 [2024-12-10 12:36:28.941451] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.003 [2024-12-10 12:36:28.953541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.003 [2024-12-10 12:36:28.953960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.003 [2024-12-10 12:36:28.953976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.003 [2024-12-10 12:36:28.953983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.003 [2024-12-10 12:36:28.954163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.003 [2024-12-10 12:36:28.954337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.003 [2024-12-10 12:36:28.954345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.003 [2024-12-10 12:36:28.954352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.004 [2024-12-10 12:36:28.954361] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.004 [2024-12-10 12:36:28.966379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.004 [2024-12-10 12:36:28.966748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.004 [2024-12-10 12:36:28.966765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.004 [2024-12-10 12:36:28.966772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.004 [2024-12-10 12:36:28.966934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.004 [2024-12-10 12:36:28.967097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.004 [2024-12-10 12:36:28.967105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.004 [2024-12-10 12:36:28.967110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.004 [2024-12-10 12:36:28.967116] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.004 [2024-12-10 12:36:28.979205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.004 [2024-12-10 12:36:28.979622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.004 [2024-12-10 12:36:28.979638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.004 [2024-12-10 12:36:28.979646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.004 [2024-12-10 12:36:28.979819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.004 [2024-12-10 12:36:28.979996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.004 [2024-12-10 12:36:28.980004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.004 [2024-12-10 12:36:28.980010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.004 [2024-12-10 12:36:28.980016] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.004 [2024-12-10 12:36:28.992080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.004 [2024-12-10 12:36:28.992532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.004 [2024-12-10 12:36:28.992577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.004 [2024-12-10 12:36:28.992599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.004 [2024-12-10 12:36:28.993112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.004 [2024-12-10 12:36:28.993292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.004 [2024-12-10 12:36:28.993301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.004 [2024-12-10 12:36:28.993307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.004 [2024-12-10 12:36:28.993313] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.004 [2024-12-10 12:36:29.004936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.004 [2024-12-10 12:36:29.005366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.004 [2024-12-10 12:36:29.005410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.004 [2024-12-10 12:36:29.005432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.004 [2024-12-10 12:36:29.006014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.004 [2024-12-10 12:36:29.006499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.004 [2024-12-10 12:36:29.006509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.004 [2024-12-10 12:36:29.006515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.004 [2024-12-10 12:36:29.006521] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.004 [2024-12-10 12:36:29.017823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.004 [2024-12-10 12:36:29.018223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.004 [2024-12-10 12:36:29.018268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.004 [2024-12-10 12:36:29.018291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.004 [2024-12-10 12:36:29.018873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.004 [2024-12-10 12:36:29.019271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.004 [2024-12-10 12:36:29.019279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.004 [2024-12-10 12:36:29.019286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.004 [2024-12-10 12:36:29.019293] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.004 [2024-12-10 12:36:29.030768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.004 [2024-12-10 12:36:29.031182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.004 [2024-12-10 12:36:29.031199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.004 [2024-12-10 12:36:29.031207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.004 [2024-12-10 12:36:29.031380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.004 [2024-12-10 12:36:29.031552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.004 [2024-12-10 12:36:29.031560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.004 [2024-12-10 12:36:29.031566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.004 [2024-12-10 12:36:29.031572] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.004 [2024-12-10 12:36:29.043649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.004 [2024-12-10 12:36:29.044024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.004 [2024-12-10 12:36:29.044040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.004 [2024-12-10 12:36:29.044050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.004 [2024-12-10 12:36:29.044238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.004 [2024-12-10 12:36:29.044412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.004 [2024-12-10 12:36:29.044420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.004 [2024-12-10 12:36:29.044426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.004 [2024-12-10 12:36:29.044432] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.004 [2024-12-10 12:36:29.056518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.004 [2024-12-10 12:36:29.056931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.004 [2024-12-10 12:36:29.056948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.004 [2024-12-10 12:36:29.056956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.004 [2024-12-10 12:36:29.057134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.004 [2024-12-10 12:36:29.057319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.004 [2024-12-10 12:36:29.057329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.004 [2024-12-10 12:36:29.057335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.004 [2024-12-10 12:36:29.057341] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.004 [2024-12-10 12:36:29.069526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.004 [2024-12-10 12:36:29.069993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.004 [2024-12-10 12:36:29.070009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.004 [2024-12-10 12:36:29.070017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.004 [2024-12-10 12:36:29.070196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.004 [2024-12-10 12:36:29.070391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.004 [2024-12-10 12:36:29.070399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.004 [2024-12-10 12:36:29.070405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.004 [2024-12-10 12:36:29.070412] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.004 [2024-12-10 12:36:29.082428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.004 [2024-12-10 12:36:29.082849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.004 [2024-12-10 12:36:29.082865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.004 [2024-12-10 12:36:29.082873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.004 [2024-12-10 12:36:29.083045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.004 [2024-12-10 12:36:29.083231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.004 [2024-12-10 12:36:29.083240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.004 [2024-12-10 12:36:29.083246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.004 [2024-12-10 12:36:29.083252] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.005 [2024-12-10 12:36:29.095232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.005 [2024-12-10 12:36:29.095640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.005 [2024-12-10 12:36:29.095657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.005 [2024-12-10 12:36:29.095664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.005 [2024-12-10 12:36:29.095827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.005 [2024-12-10 12:36:29.095990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.005 [2024-12-10 12:36:29.095998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.005 [2024-12-10 12:36:29.096004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.005 [2024-12-10 12:36:29.096010] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.005 [2024-12-10 12:36:29.108125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.005 [2024-12-10 12:36:29.108500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.005 [2024-12-10 12:36:29.108517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.005 [2024-12-10 12:36:29.108525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.005 [2024-12-10 12:36:29.108697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.005 [2024-12-10 12:36:29.108870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.005 [2024-12-10 12:36:29.108879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.005 [2024-12-10 12:36:29.108885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.005 [2024-12-10 12:36:29.108891] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.005 [2024-12-10 12:36:29.121008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.005 [2024-12-10 12:36:29.121401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.005 [2024-12-10 12:36:29.121418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.005 [2024-12-10 12:36:29.121425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.005 [2024-12-10 12:36:29.121597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.005 [2024-12-10 12:36:29.121770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.005 [2024-12-10 12:36:29.121778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.005 [2024-12-10 12:36:29.121784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.005 [2024-12-10 12:36:29.121794] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.005 [2024-12-10 12:36:29.133837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.005 [2024-12-10 12:36:29.134210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.005 [2024-12-10 12:36:29.134228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.005 [2024-12-10 12:36:29.134235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.005 [2024-12-10 12:36:29.134408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.005 [2024-12-10 12:36:29.134581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.005 [2024-12-10 12:36:29.134591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.005 [2024-12-10 12:36:29.134597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.005 [2024-12-10 12:36:29.134603] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.005 [2024-12-10 12:36:29.146832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.005 [2024-12-10 12:36:29.147300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.005 [2024-12-10 12:36:29.147341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.005 [2024-12-10 12:36:29.147365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.005 [2024-12-10 12:36:29.147907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.005 [2024-12-10 12:36:29.148081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.005 [2024-12-10 12:36:29.148090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.005 [2024-12-10 12:36:29.148096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.005 [2024-12-10 12:36:29.148104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.005 [2024-12-10 12:36:29.159997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.005 [2024-12-10 12:36:29.160431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.005 [2024-12-10 12:36:29.160447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.005 [2024-12-10 12:36:29.160455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.005 [2024-12-10 12:36:29.160632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.005 [2024-12-10 12:36:29.160812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.005 [2024-12-10 12:36:29.160821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.005 [2024-12-10 12:36:29.160827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.005 [2024-12-10 12:36:29.160834] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.264 [2024-12-10 12:36:29.173009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.264 [2024-12-10 12:36:29.173468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.264 [2024-12-10 12:36:29.173485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.264 [2024-12-10 12:36:29.173492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.264 [2024-12-10 12:36:29.173670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.264 [2024-12-10 12:36:29.173848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.264 [2024-12-10 12:36:29.173857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.265 [2024-12-10 12:36:29.173863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.265 [2024-12-10 12:36:29.173869] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.265 [2024-12-10 12:36:29.186008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.265 [2024-12-10 12:36:29.186423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.265 [2024-12-10 12:36:29.186440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.265 [2024-12-10 12:36:29.186447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.265 [2024-12-10 12:36:29.186619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.265 [2024-12-10 12:36:29.186797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.265 [2024-12-10 12:36:29.186805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.265 [2024-12-10 12:36:29.186812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.265 [2024-12-10 12:36:29.186818] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.265 [2024-12-10 12:36:29.198916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.265 [2024-12-10 12:36:29.199281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.265 [2024-12-10 12:36:29.199325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.265 [2024-12-10 12:36:29.199348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.265 [2024-12-10 12:36:29.199854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.265 [2024-12-10 12:36:29.200018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.265 [2024-12-10 12:36:29.200026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.265 [2024-12-10 12:36:29.200032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.265 [2024-12-10 12:36:29.200038] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.265 [2024-12-10 12:36:29.211850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.265 [2024-12-10 12:36:29.212253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.265 [2024-12-10 12:36:29.212269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.265 [2024-12-10 12:36:29.212280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.265 [2024-12-10 12:36:29.212453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.265 [2024-12-10 12:36:29.212629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.265 [2024-12-10 12:36:29.212638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.265 [2024-12-10 12:36:29.212644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.265 [2024-12-10 12:36:29.212650] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.265 [2024-12-10 12:36:29.224972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.265 [2024-12-10 12:36:29.225347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.265 [2024-12-10 12:36:29.225364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.265 [2024-12-10 12:36:29.225372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.265 [2024-12-10 12:36:29.225544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.265 [2024-12-10 12:36:29.225717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.265 [2024-12-10 12:36:29.225726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.265 [2024-12-10 12:36:29.225732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.265 [2024-12-10 12:36:29.225738] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.265 [2024-12-10 12:36:29.237895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.265 [2024-12-10 12:36:29.238264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.265 [2024-12-10 12:36:29.238281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.265 [2024-12-10 12:36:29.238288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.265 [2024-12-10 12:36:29.238465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.265 [2024-12-10 12:36:29.238628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.265 [2024-12-10 12:36:29.238636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.265 [2024-12-10 12:36:29.238642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.265 [2024-12-10 12:36:29.238648] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.265 [2024-12-10 12:36:29.250909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.265 [2024-12-10 12:36:29.251269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.265 [2024-12-10 12:36:29.251286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.265 [2024-12-10 12:36:29.251294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.265 [2024-12-10 12:36:29.251465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.265 [2024-12-10 12:36:29.251638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.265 [2024-12-10 12:36:29.251649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.265 [2024-12-10 12:36:29.251655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.265 [2024-12-10 12:36:29.251661] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.265 [2024-12-10 12:36:29.263849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.265 [2024-12-10 12:36:29.264306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.265 [2024-12-10 12:36:29.264324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.265 [2024-12-10 12:36:29.264332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.265 [2024-12-10 12:36:29.264504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.265 [2024-12-10 12:36:29.264680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.265 [2024-12-10 12:36:29.264689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.265 [2024-12-10 12:36:29.264695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.265 [2024-12-10 12:36:29.264701] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.265 [2024-12-10 12:36:29.276809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.265 [2024-12-10 12:36:29.277271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.265 [2024-12-10 12:36:29.277288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.265 [2024-12-10 12:36:29.277295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.265 [2024-12-10 12:36:29.277468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.265 [2024-12-10 12:36:29.277640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.265 [2024-12-10 12:36:29.277648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.265 [2024-12-10 12:36:29.277654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.265 [2024-12-10 12:36:29.277660] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.265 [2024-12-10 12:36:29.289696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.265 [2024-12-10 12:36:29.290115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.265 [2024-12-10 12:36:29.290131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.265 [2024-12-10 12:36:29.290138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.265 [2024-12-10 12:36:29.290315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.265 [2024-12-10 12:36:29.290488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.265 [2024-12-10 12:36:29.290497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.265 [2024-12-10 12:36:29.290503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.265 [2024-12-10 12:36:29.290512] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.265 [2024-12-10 12:36:29.302887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.265 [2024-12-10 12:36:29.303258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.265 [2024-12-10 12:36:29.303276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.265 [2024-12-10 12:36:29.303284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.265 [2024-12-10 12:36:29.303462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.265 [2024-12-10 12:36:29.303641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.265 [2024-12-10 12:36:29.303649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.265 [2024-12-10 12:36:29.303656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.266 [2024-12-10 12:36:29.303662] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.266 [2024-12-10 12:36:29.316047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.266 [2024-12-10 12:36:29.316438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.266 [2024-12-10 12:36:29.316456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.266 [2024-12-10 12:36:29.316464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.266 [2024-12-10 12:36:29.316641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.266 [2024-12-10 12:36:29.316820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.266 [2024-12-10 12:36:29.316828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.266 [2024-12-10 12:36:29.316835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.266 [2024-12-10 12:36:29.316841] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.266 [2024-12-10 12:36:29.329126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.266 [2024-12-10 12:36:29.329500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.266 [2024-12-10 12:36:29.329553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.266 [2024-12-10 12:36:29.329576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.266 [2024-12-10 12:36:29.330151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.266 [2024-12-10 12:36:29.330337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.266 [2024-12-10 12:36:29.330346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.266 [2024-12-10 12:36:29.330353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.266 [2024-12-10 12:36:29.330358] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.266 [2024-12-10 12:36:29.342148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.266 [2024-12-10 12:36:29.342537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.266 [2024-12-10 12:36:29.342553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.266 [2024-12-10 12:36:29.342560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.266 [2024-12-10 12:36:29.342737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.266 [2024-12-10 12:36:29.342917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.266 [2024-12-10 12:36:29.342925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.266 [2024-12-10 12:36:29.342931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.266 [2024-12-10 12:36:29.342938] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.266 [2024-12-10 12:36:29.355216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.266 [2024-12-10 12:36:29.355566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.266 [2024-12-10 12:36:29.355582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.266 [2024-12-10 12:36:29.355589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.266 [2024-12-10 12:36:29.355761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.266 [2024-12-10 12:36:29.355934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.266 [2024-12-10 12:36:29.355942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.266 [2024-12-10 12:36:29.355948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.266 [2024-12-10 12:36:29.355955] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.266 [2024-12-10 12:36:29.368151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.266 [2024-12-10 12:36:29.368445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.266 [2024-12-10 12:36:29.368462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.266 [2024-12-10 12:36:29.368469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.266 [2024-12-10 12:36:29.368642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.266 [2024-12-10 12:36:29.368815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.266 [2024-12-10 12:36:29.368823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.266 [2024-12-10 12:36:29.368830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.266 [2024-12-10 12:36:29.368836] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.266 [2024-12-10 12:36:29.381193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.266 [2024-12-10 12:36:29.381592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.266 [2024-12-10 12:36:29.381608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.266 [2024-12-10 12:36:29.381619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.266 [2024-12-10 12:36:29.381791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.266 [2024-12-10 12:36:29.381968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.266 [2024-12-10 12:36:29.381977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.266 [2024-12-10 12:36:29.381983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.266 [2024-12-10 12:36:29.381989] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.266 [2024-12-10 12:36:29.394039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.266 [2024-12-10 12:36:29.394397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.266 [2024-12-10 12:36:29.394445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.266 [2024-12-10 12:36:29.394468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.266 [2024-12-10 12:36:29.395051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.266 [2024-12-10 12:36:29.395229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.266 [2024-12-10 12:36:29.395238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.266 [2024-12-10 12:36:29.395244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.266 [2024-12-10 12:36:29.395250] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.266 [2024-12-10 12:36:29.406880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.266 [2024-12-10 12:36:29.407250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.266 [2024-12-10 12:36:29.407267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.266 [2024-12-10 12:36:29.407274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.266 [2024-12-10 12:36:29.407447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.266 [2024-12-10 12:36:29.407619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.266 [2024-12-10 12:36:29.407627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.266 [2024-12-10 12:36:29.407633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.266 [2024-12-10 12:36:29.407639] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.266 [2024-12-10 12:36:29.419834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.266 [2024-12-10 12:36:29.420165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.266 [2024-12-10 12:36:29.420182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.266 [2024-12-10 12:36:29.420189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.266 [2024-12-10 12:36:29.420361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.266 [2024-12-10 12:36:29.420534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.266 [2024-12-10 12:36:29.420546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.266 [2024-12-10 12:36:29.420552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.266 [2024-12-10 12:36:29.420558] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.527 [2024-12-10 12:36:29.432831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.527 [2024-12-10 12:36:29.433279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.527 [2024-12-10 12:36:29.433324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.527 [2024-12-10 12:36:29.433347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.527 [2024-12-10 12:36:29.433929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.527 [2024-12-10 12:36:29.434291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.527 [2024-12-10 12:36:29.434300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.527 [2024-12-10 12:36:29.434306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.527 [2024-12-10 12:36:29.434312] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.527 [2024-12-10 12:36:29.445654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.527 [2024-12-10 12:36:29.446094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.527 [2024-12-10 12:36:29.446110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.527 [2024-12-10 12:36:29.446117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.527 [2024-12-10 12:36:29.446296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.527 [2024-12-10 12:36:29.446470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.527 [2024-12-10 12:36:29.446478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.527 [2024-12-10 12:36:29.446484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.527 [2024-12-10 12:36:29.446490] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.527 [2024-12-10 12:36:29.458686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.527 [2024-12-10 12:36:29.459049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.527 [2024-12-10 12:36:29.459066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.527 [2024-12-10 12:36:29.459073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.527 [2024-12-10 12:36:29.459250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.527 [2024-12-10 12:36:29.459424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.527 [2024-12-10 12:36:29.459432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.527 [2024-12-10 12:36:29.459438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.527 [2024-12-10 12:36:29.459448] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.527 [2024-12-10 12:36:29.471595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.527 [2024-12-10 12:36:29.471884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.527 [2024-12-10 12:36:29.471901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.527 [2024-12-10 12:36:29.471909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.527 [2024-12-10 12:36:29.472081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.527 [2024-12-10 12:36:29.472260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.527 [2024-12-10 12:36:29.472270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.527 [2024-12-10 12:36:29.472276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.527 [2024-12-10 12:36:29.472282] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.527 [2024-12-10 12:36:29.484475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.527 [2024-12-10 12:36:29.484812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.527 [2024-12-10 12:36:29.484829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.527 [2024-12-10 12:36:29.484837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.527 [2024-12-10 12:36:29.485009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.527 [2024-12-10 12:36:29.485188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.527 [2024-12-10 12:36:29.485197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.527 [2024-12-10 12:36:29.485204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.527 [2024-12-10 12:36:29.485210] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.527 [2024-12-10 12:36:29.497627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.527 [2024-12-10 12:36:29.497919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.527 [2024-12-10 12:36:29.497936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.527 [2024-12-10 12:36:29.497944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.527 [2024-12-10 12:36:29.498121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.527 [2024-12-10 12:36:29.498307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.527 [2024-12-10 12:36:29.498316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.527 [2024-12-10 12:36:29.498323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.527 [2024-12-10 12:36:29.498329] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.527 [2024-12-10 12:36:29.510766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.527 [2024-12-10 12:36:29.511037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.527 [2024-12-10 12:36:29.511053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.527 [2024-12-10 12:36:29.511060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.527 [2024-12-10 12:36:29.511245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.527 [2024-12-10 12:36:29.511424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.527 [2024-12-10 12:36:29.511432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.527 [2024-12-10 12:36:29.511438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.527 [2024-12-10 12:36:29.511444] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.527 [2024-12-10 12:36:29.523869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.528 [2024-12-10 12:36:29.524276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.528 [2024-12-10 12:36:29.524294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.528 [2024-12-10 12:36:29.524302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.528 [2024-12-10 12:36:29.524480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.528 [2024-12-10 12:36:29.524659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.528 [2024-12-10 12:36:29.524668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.528 [2024-12-10 12:36:29.524674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.528 [2024-12-10 12:36:29.524680] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.528 [2024-12-10 12:36:29.536976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.528 [2024-12-10 12:36:29.537411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.528 [2024-12-10 12:36:29.537456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.528 [2024-12-10 12:36:29.537479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.528 [2024-12-10 12:36:29.537912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.528 [2024-12-10 12:36:29.538091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.528 [2024-12-10 12:36:29.538099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.528 [2024-12-10 12:36:29.538105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.528 [2024-12-10 12:36:29.538111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.528 [2024-12-10 12:36:29.550056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.528 [2024-12-10 12:36:29.550401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.528 [2024-12-10 12:36:29.550418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.528 [2024-12-10 12:36:29.550429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.528 [2024-12-10 12:36:29.550606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.528 [2024-12-10 12:36:29.550786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.528 [2024-12-10 12:36:29.550794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.528 [2024-12-10 12:36:29.550800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.528 [2024-12-10 12:36:29.550807] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.528 [2024-12-10 12:36:29.563187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.528 [2024-12-10 12:36:29.563571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.528 [2024-12-10 12:36:29.563588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.528 [2024-12-10 12:36:29.563595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.528 [2024-12-10 12:36:29.563768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.528 [2024-12-10 12:36:29.563942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.528 [2024-12-10 12:36:29.563951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.528 [2024-12-10 12:36:29.563957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.528 [2024-12-10 12:36:29.563963] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.528 [2024-12-10 12:36:29.576062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.528 [2024-12-10 12:36:29.576505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.528 [2024-12-10 12:36:29.576522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.528 [2024-12-10 12:36:29.576529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.528 [2024-12-10 12:36:29.576707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.528 [2024-12-10 12:36:29.576885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.528 [2024-12-10 12:36:29.576893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.528 [2024-12-10 12:36:29.576900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.528 [2024-12-10 12:36:29.576906] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.528 [2024-12-10 12:36:29.589143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.528 [2024-12-10 12:36:29.589570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.528 [2024-12-10 12:36:29.589587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.528 [2024-12-10 12:36:29.589595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.528 [2024-12-10 12:36:29.589768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.528 [2024-12-10 12:36:29.589945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.528 [2024-12-10 12:36:29.589956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.528 [2024-12-10 12:36:29.589962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.528 [2024-12-10 12:36:29.589968] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.528 [2024-12-10 12:36:29.602257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.528 [2024-12-10 12:36:29.602615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.528 [2024-12-10 12:36:29.602632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.528 [2024-12-10 12:36:29.602639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.528 [2024-12-10 12:36:29.602811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.528 [2024-12-10 12:36:29.602984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.528 [2024-12-10 12:36:29.602992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.528 [2024-12-10 12:36:29.602998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.528 [2024-12-10 12:36:29.603004] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.528 [2024-12-10 12:36:29.615203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.528 [2024-12-10 12:36:29.615659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.528 [2024-12-10 12:36:29.615704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.528 [2024-12-10 12:36:29.615727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.528 [2024-12-10 12:36:29.616324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.528 [2024-12-10 12:36:29.616871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.528 [2024-12-10 12:36:29.616879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.528 [2024-12-10 12:36:29.616885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.528 [2024-12-10 12:36:29.616892] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.528 [2024-12-10 12:36:29.628055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.528 [2024-12-10 12:36:29.628512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.528 [2024-12-10 12:36:29.628556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.528 [2024-12-10 12:36:29.628579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.528 [2024-12-10 12:36:29.629070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.528 [2024-12-10 12:36:29.629250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.528 [2024-12-10 12:36:29.629259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.528 [2024-12-10 12:36:29.629265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.528 [2024-12-10 12:36:29.629274] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.528 [2024-12-10 12:36:29.640898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.528 [2024-12-10 12:36:29.641342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.528 [2024-12-10 12:36:29.641392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.528 [2024-12-10 12:36:29.641415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.528 [2024-12-10 12:36:29.641932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.528 [2024-12-10 12:36:29.642105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.528 [2024-12-10 12:36:29.642113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.528 [2024-12-10 12:36:29.642120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.528 [2024-12-10 12:36:29.642126] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.528 [2024-12-10 12:36:29.653756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.529 [2024-12-10 12:36:29.654189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.529 [2024-12-10 12:36:29.654235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.529 [2024-12-10 12:36:29.654257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.529 [2024-12-10 12:36:29.654701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.529 [2024-12-10 12:36:29.654865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.529 [2024-12-10 12:36:29.654873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.529 [2024-12-10 12:36:29.654879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.529 [2024-12-10 12:36:29.654884] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.529 [2024-12-10 12:36:29.666597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.529 [2024-12-10 12:36:29.667022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.529 [2024-12-10 12:36:29.667039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.529 [2024-12-10 12:36:29.667045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.529 [2024-12-10 12:36:29.667231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.529 [2024-12-10 12:36:29.667405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.529 [2024-12-10 12:36:29.667413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.529 [2024-12-10 12:36:29.667419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.529 [2024-12-10 12:36:29.667425] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.529 [2024-12-10 12:36:29.679492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.529 [2024-12-10 12:36:29.679917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.529 [2024-12-10 12:36:29.679932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.529 [2024-12-10 12:36:29.679939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.529 [2024-12-10 12:36:29.680101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.529 [2024-12-10 12:36:29.680291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.529 [2024-12-10 12:36:29.680300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.529 [2024-12-10 12:36:29.680306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.529 [2024-12-10 12:36:29.680312] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.790 [2024-12-10 12:36:29.692584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.790 [2024-12-10 12:36:29.693048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.790 [2024-12-10 12:36:29.693091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.790 [2024-12-10 12:36:29.693114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.790 [2024-12-10 12:36:29.693712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.790 [2024-12-10 12:36:29.694167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.790 [2024-12-10 12:36:29.694176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.790 [2024-12-10 12:36:29.694183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.790 [2024-12-10 12:36:29.694188] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.790 [2024-12-10 12:36:29.705391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.790 [2024-12-10 12:36:29.705721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.790 [2024-12-10 12:36:29.705766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.790 [2024-12-10 12:36:29.705789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.790 [2024-12-10 12:36:29.706388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.790 [2024-12-10 12:36:29.706975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.790 [2024-12-10 12:36:29.706999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.790 [2024-12-10 12:36:29.707020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.790 [2024-12-10 12:36:29.707039] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.790 [2024-12-10 12:36:29.718210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.790 [2024-12-10 12:36:29.718624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.790 [2024-12-10 12:36:29.718640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.790 [2024-12-10 12:36:29.718646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.790 [2024-12-10 12:36:29.718813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.790 [2024-12-10 12:36:29.718976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.790 [2024-12-10 12:36:29.718984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.790 [2024-12-10 12:36:29.718990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.791 [2024-12-10 12:36:29.718995] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.791 [2024-12-10 12:36:29.731211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.791 [2024-12-10 12:36:29.731605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.791 [2024-12-10 12:36:29.731650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.791 [2024-12-10 12:36:29.731673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.791 [2024-12-10 12:36:29.732221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.791 [2024-12-10 12:36:29.732397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.791 [2024-12-10 12:36:29.732407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.791 [2024-12-10 12:36:29.732414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.791 [2024-12-10 12:36:29.732420] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.791 7119.00 IOPS, 27.81 MiB/s [2024-12-10T11:36:29.959Z]
00:28:07.791 [2024-12-10 12:36:29.744164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.791 [2024-12-10 12:36:29.744460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.791 [2024-12-10 12:36:29.744476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.791 [2024-12-10 12:36:29.744483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.791 [2024-12-10 12:36:29.744655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.791 [2024-12-10 12:36:29.744831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.791 [2024-12-10 12:36:29.744839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.791 [2024-12-10 12:36:29.744845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.791 [2024-12-10 12:36:29.744851] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.791 [2024-12-10 12:36:29.757043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.791 [2024-12-10 12:36:29.757384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.791 [2024-12-10 12:36:29.757401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.791 [2024-12-10 12:36:29.757409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.791 [2024-12-10 12:36:29.757581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.791 [2024-12-10 12:36:29.757758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.791 [2024-12-10 12:36:29.757766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.791 [2024-12-10 12:36:29.757772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.791 [2024-12-10 12:36:29.757778] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.791 [2024-12-10 12:36:29.770032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.791 [2024-12-10 12:36:29.770472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.791 [2024-12-10 12:36:29.770488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.791 [2024-12-10 12:36:29.770495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.791 [2024-12-10 12:36:29.770669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.791 [2024-12-10 12:36:29.770845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.791 [2024-12-10 12:36:29.770853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.791 [2024-12-10 12:36:29.770860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.791 [2024-12-10 12:36:29.770866] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.791 [2024-12-10 12:36:29.782879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.791 [2024-12-10 12:36:29.783317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.791 [2024-12-10 12:36:29.783362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.791 [2024-12-10 12:36:29.783385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.791 [2024-12-10 12:36:29.783966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.791 [2024-12-10 12:36:29.784436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.791 [2024-12-10 12:36:29.784445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.791 [2024-12-10 12:36:29.784451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.791 [2024-12-10 12:36:29.784457] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.791 [2024-12-10 12:36:29.795719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.791 [2024-12-10 12:36:29.796162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.791 [2024-12-10 12:36:29.796179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.791 [2024-12-10 12:36:29.796186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.791 [2024-12-10 12:36:29.796358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.791 [2024-12-10 12:36:29.796531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.791 [2024-12-10 12:36:29.796539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.791 [2024-12-10 12:36:29.796549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.791 [2024-12-10 12:36:29.796555] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.791 [2024-12-10 12:36:29.808654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.791 [2024-12-10 12:36:29.809091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.791 [2024-12-10 12:36:29.809107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.791 [2024-12-10 12:36:29.809114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.791 [2024-12-10 12:36:29.809290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.791 [2024-12-10 12:36:29.809463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.791 [2024-12-10 12:36:29.809471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.791 [2024-12-10 12:36:29.809477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.791 [2024-12-10 12:36:29.809483] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.791 [2024-12-10 12:36:29.821518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.791 [2024-12-10 12:36:29.821883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.791 [2024-12-10 12:36:29.821927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.791 [2024-12-10 12:36:29.821949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.791 [2024-12-10 12:36:29.822463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.791 [2024-12-10 12:36:29.822632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.791 [2024-12-10 12:36:29.822640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.791 [2024-12-10 12:36:29.822646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.791 [2024-12-10 12:36:29.822652] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.791 [2024-12-10 12:36:29.834419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.791 [2024-12-10 12:36:29.834864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.791 [2024-12-10 12:36:29.834881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.791 [2024-12-10 12:36:29.834889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.791 [2024-12-10 12:36:29.835061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.791 [2024-12-10 12:36:29.835257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.791 [2024-12-10 12:36:29.835266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.791 [2024-12-10 12:36:29.835273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.791 [2024-12-10 12:36:29.835279] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.791 [2024-12-10 12:36:29.847525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:07.791 [2024-12-10 12:36:29.847880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:07.791 [2024-12-10 12:36:29.847896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:07.791 [2024-12-10 12:36:29.847903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:07.792 [2024-12-10 12:36:29.848076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:07.792 [2024-12-10 12:36:29.848254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:07.792 [2024-12-10 12:36:29.848262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:07.792 [2024-12-10 12:36:29.848268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:07.792 [2024-12-10 12:36:29.848274] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:07.792 [2024-12-10 12:36:29.860448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.792 [2024-12-10 12:36:29.860870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.792 [2024-12-10 12:36:29.860886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.792 [2024-12-10 12:36:29.860893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.792 [2024-12-10 12:36:29.861057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.792 [2024-12-10 12:36:29.861243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.792 [2024-12-10 12:36:29.861252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.792 [2024-12-10 12:36:29.861258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.792 [2024-12-10 12:36:29.861264] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.792 [2024-12-10 12:36:29.873427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.792 [2024-12-10 12:36:29.873870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.792 [2024-12-10 12:36:29.873886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.792 [2024-12-10 12:36:29.873892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.792 [2024-12-10 12:36:29.874056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.792 [2024-12-10 12:36:29.874243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.792 [2024-12-10 12:36:29.874252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.792 [2024-12-10 12:36:29.874258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.792 [2024-12-10 12:36:29.874264] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.792 [2024-12-10 12:36:29.886253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.792 [2024-12-10 12:36:29.886722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.792 [2024-12-10 12:36:29.886765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.792 [2024-12-10 12:36:29.886795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.792 [2024-12-10 12:36:29.887394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.792 [2024-12-10 12:36:29.887581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.792 [2024-12-10 12:36:29.887588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.792 [2024-12-10 12:36:29.887594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.792 [2024-12-10 12:36:29.887600] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.792 [2024-12-10 12:36:29.899127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.792 [2024-12-10 12:36:29.899551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.792 [2024-12-10 12:36:29.899568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.792 [2024-12-10 12:36:29.899575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.792 [2024-12-10 12:36:29.899738] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.792 [2024-12-10 12:36:29.899901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.792 [2024-12-10 12:36:29.899909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.792 [2024-12-10 12:36:29.899915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.792 [2024-12-10 12:36:29.899920] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.792 [2024-12-10 12:36:29.912066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.792 [2024-12-10 12:36:29.912525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.792 [2024-12-10 12:36:29.912570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.792 [2024-12-10 12:36:29.912593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.792 [2024-12-10 12:36:29.913197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.792 [2024-12-10 12:36:29.913589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.792 [2024-12-10 12:36:29.913605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.792 [2024-12-10 12:36:29.913619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.792 [2024-12-10 12:36:29.913632] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.792 [2024-12-10 12:36:29.926910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.792 [2024-12-10 12:36:29.927443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.792 [2024-12-10 12:36:29.927488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.792 [2024-12-10 12:36:29.927510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.792 [2024-12-10 12:36:29.928093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.792 [2024-12-10 12:36:29.928680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.792 [2024-12-10 12:36:29.928692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.792 [2024-12-10 12:36:29.928701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.792 [2024-12-10 12:36:29.928710] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.792 [2024-12-10 12:36:29.939833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.792 [2024-12-10 12:36:29.940273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.792 [2024-12-10 12:36:29.940290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.792 [2024-12-10 12:36:29.940297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.792 [2024-12-10 12:36:29.940470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.792 [2024-12-10 12:36:29.940642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.792 [2024-12-10 12:36:29.940650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.792 [2024-12-10 12:36:29.940656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.792 [2024-12-10 12:36:29.940662] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:07.792 [2024-12-10 12:36:29.952961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:07.792 [2024-12-10 12:36:29.953410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:07.792 [2024-12-10 12:36:29.953455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:07.792 [2024-12-10 12:36:29.953478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:07.792 [2024-12-10 12:36:29.954067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:07.792 [2024-12-10 12:36:29.954613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:07.792 [2024-12-10 12:36:29.954623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:07.792 [2024-12-10 12:36:29.954629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:07.792 [2024-12-10 12:36:29.954636] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.053 [2024-12-10 12:36:29.965802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.053 [2024-12-10 12:36:29.966263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.053 [2024-12-10 12:36:29.966308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.053 [2024-12-10 12:36:29.966331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.053 [2024-12-10 12:36:29.966761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.053 [2024-12-10 12:36:29.966925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.053 [2024-12-10 12:36:29.966933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.053 [2024-12-10 12:36:29.966939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.053 [2024-12-10 12:36:29.966950] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.053 [2024-12-10 12:36:29.978724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.053 [2024-12-10 12:36:29.979063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.053 [2024-12-10 12:36:29.979079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.053 [2024-12-10 12:36:29.979086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.053 [2024-12-10 12:36:29.979275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.053 [2024-12-10 12:36:29.979453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.053 [2024-12-10 12:36:29.979462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.053 [2024-12-10 12:36:29.979468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.053 [2024-12-10 12:36:29.979474] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.053 [2024-12-10 12:36:29.991546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.053 [2024-12-10 12:36:29.991968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.053 [2024-12-10 12:36:29.991983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.053 [2024-12-10 12:36:29.991990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.053 [2024-12-10 12:36:29.992154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.053 [2024-12-10 12:36:29.992347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.053 [2024-12-10 12:36:29.992356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.053 [2024-12-10 12:36:29.992362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.053 [2024-12-10 12:36:29.992368] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.053 [2024-12-10 12:36:30.005547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.053 [2024-12-10 12:36:30.005892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.053 [2024-12-10 12:36:30.005912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.053 [2024-12-10 12:36:30.005921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.053 [2024-12-10 12:36:30.006117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.053 [2024-12-10 12:36:30.006319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.053 [2024-12-10 12:36:30.006329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.053 [2024-12-10 12:36:30.006337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.053 [2024-12-10 12:36:30.006344] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.053 [2024-12-10 12:36:30.018662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.053 [2024-12-10 12:36:30.019076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.053 [2024-12-10 12:36:30.019093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.053 [2024-12-10 12:36:30.019100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.053 [2024-12-10 12:36:30.019284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.053 [2024-12-10 12:36:30.019464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.053 [2024-12-10 12:36:30.019472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.053 [2024-12-10 12:36:30.019480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.053 [2024-12-10 12:36:30.019487] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.053 [2024-12-10 12:36:30.031860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.053 [2024-12-10 12:36:30.032275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.053 [2024-12-10 12:36:30.032293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.053 [2024-12-10 12:36:30.032301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.054 [2024-12-10 12:36:30.032479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.054 [2024-12-10 12:36:30.032656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.054 [2024-12-10 12:36:30.032665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.054 [2024-12-10 12:36:30.032671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.054 [2024-12-10 12:36:30.032678] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.054 [2024-12-10 12:36:30.044998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.054 [2024-12-10 12:36:30.045406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.054 [2024-12-10 12:36:30.045424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.054 [2024-12-10 12:36:30.045431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.054 [2024-12-10 12:36:30.045610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.054 [2024-12-10 12:36:30.045789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.054 [2024-12-10 12:36:30.045797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.054 [2024-12-10 12:36:30.045804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.054 [2024-12-10 12:36:30.045810] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.054 [2024-12-10 12:36:30.058146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.054 [2024-12-10 12:36:30.058525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.054 [2024-12-10 12:36:30.058542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.054 [2024-12-10 12:36:30.058553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.054 [2024-12-10 12:36:30.058731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.054 [2024-12-10 12:36:30.058910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.054 [2024-12-10 12:36:30.058918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.054 [2024-12-10 12:36:30.058926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.054 [2024-12-10 12:36:30.058932] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.054 [2024-12-10 12:36:30.071167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.054 [2024-12-10 12:36:30.071588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.054 [2024-12-10 12:36:30.071605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.054 [2024-12-10 12:36:30.071612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.054 [2024-12-10 12:36:30.071785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.054 [2024-12-10 12:36:30.071958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.054 [2024-12-10 12:36:30.071967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.054 [2024-12-10 12:36:30.071973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.054 [2024-12-10 12:36:30.071980] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.054 [2024-12-10 12:36:30.084356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.054 [2024-12-10 12:36:30.084760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.054 [2024-12-10 12:36:30.084807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.054 [2024-12-10 12:36:30.084831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.054 [2024-12-10 12:36:30.085428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.054 [2024-12-10 12:36:30.085777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.054 [2024-12-10 12:36:30.085785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.054 [2024-12-10 12:36:30.085792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.054 [2024-12-10 12:36:30.085798] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.054 [2024-12-10 12:36:30.097472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.054 [2024-12-10 12:36:30.097895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.054 [2024-12-10 12:36:30.097912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.054 [2024-12-10 12:36:30.097921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.054 [2024-12-10 12:36:30.098099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.054 [2024-12-10 12:36:30.098288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.054 [2024-12-10 12:36:30.098298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.054 [2024-12-10 12:36:30.098304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.054 [2024-12-10 12:36:30.098311] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.054 [2024-12-10 12:36:30.110518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.054 [2024-12-10 12:36:30.110942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.054 [2024-12-10 12:36:30.110960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.054 [2024-12-10 12:36:30.110967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.054 [2024-12-10 12:36:30.111140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.054 [2024-12-10 12:36:30.111321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.054 [2024-12-10 12:36:30.111330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.054 [2024-12-10 12:36:30.111336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.054 [2024-12-10 12:36:30.111342] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.054 [2024-12-10 12:36:30.123610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.054 [2024-12-10 12:36:30.124038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.054 [2024-12-10 12:36:30.124054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.054 [2024-12-10 12:36:30.124062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.054 [2024-12-10 12:36:30.124257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.054 [2024-12-10 12:36:30.124437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.054 [2024-12-10 12:36:30.124446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.054 [2024-12-10 12:36:30.124452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.054 [2024-12-10 12:36:30.124458] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.054 [2024-12-10 12:36:30.136621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.054 [2024-12-10 12:36:30.137053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.054 [2024-12-10 12:36:30.137070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.054 [2024-12-10 12:36:30.137077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.054 [2024-12-10 12:36:30.137257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.054 [2024-12-10 12:36:30.137430] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.054 [2024-12-10 12:36:30.137438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.054 [2024-12-10 12:36:30.137445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.054 [2024-12-10 12:36:30.137454] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.054 [2024-12-10 12:36:30.149567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.054 [2024-12-10 12:36:30.149982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.054 [2024-12-10 12:36:30.149999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.054 [2024-12-10 12:36:30.150006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.054 [2024-12-10 12:36:30.150184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.054 [2024-12-10 12:36:30.150358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.054 [2024-12-10 12:36:30.150367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.054 [2024-12-10 12:36:30.150373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.054 [2024-12-10 12:36:30.150379] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.054 [2024-12-10 12:36:30.162490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.054 [2024-12-10 12:36:30.162845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.054 [2024-12-10 12:36:30.162861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.054 [2024-12-10 12:36:30.162868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.054 [2024-12-10 12:36:30.163060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.055 [2024-12-10 12:36:30.163243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.055 [2024-12-10 12:36:30.163252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.055 [2024-12-10 12:36:30.163258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.055 [2024-12-10 12:36:30.163265] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.055 [2024-12-10 12:36:30.175453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.055 [2024-12-10 12:36:30.175788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-12-10 12:36:30.175804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.055 [2024-12-10 12:36:30.175812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.055 [2024-12-10 12:36:30.175983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.055 [2024-12-10 12:36:30.176156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.055 [2024-12-10 12:36:30.176171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.055 [2024-12-10 12:36:30.176177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.055 [2024-12-10 12:36:30.176183] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.055 [2024-12-10 12:36:30.188401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.055 [2024-12-10 12:36:30.188783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-12-10 12:36:30.188799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.055 [2024-12-10 12:36:30.188806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.055 [2024-12-10 12:36:30.188978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.055 [2024-12-10 12:36:30.189151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.055 [2024-12-10 12:36:30.189165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.055 [2024-12-10 12:36:30.189172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.055 [2024-12-10 12:36:30.189178] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.055 [2024-12-10 12:36:30.201242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.055 [2024-12-10 12:36:30.201610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-12-10 12:36:30.201626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.055 [2024-12-10 12:36:30.201633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.055 [2024-12-10 12:36:30.201806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.055 [2024-12-10 12:36:30.201979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.055 [2024-12-10 12:36:30.201987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.055 [2024-12-10 12:36:30.201993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.055 [2024-12-10 12:36:30.201999] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.055 [2024-12-10 12:36:30.214290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.055 [2024-12-10 12:36:30.214578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.055 [2024-12-10 12:36:30.214595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.055 [2024-12-10 12:36:30.214603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.055 [2024-12-10 12:36:30.214780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.055 [2024-12-10 12:36:30.214985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.055 [2024-12-10 12:36:30.214994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.055 [2024-12-10 12:36:30.215000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.055 [2024-12-10 12:36:30.215006] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.315 [2024-12-10 12:36:30.227444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.315 [2024-12-10 12:36:30.227911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.315 [2024-12-10 12:36:30.227955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.315 [2024-12-10 12:36:30.227986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.315 [2024-12-10 12:36:30.228281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.315 [2024-12-10 12:36:30.228469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.315 [2024-12-10 12:36:30.228477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.315 [2024-12-10 12:36:30.228483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.315 [2024-12-10 12:36:30.228489] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.315 [2024-12-10 12:36:30.240447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.315 [2024-12-10 12:36:30.240862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.315 [2024-12-10 12:36:30.240879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.315 [2024-12-10 12:36:30.240886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.315 [2024-12-10 12:36:30.241059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.315 [2024-12-10 12:36:30.241239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.315 [2024-12-10 12:36:30.241248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.315 [2024-12-10 12:36:30.241254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.315 [2024-12-10 12:36:30.241260] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.315 [2024-12-10 12:36:30.253326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.315 [2024-12-10 12:36:30.253770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.315 [2024-12-10 12:36:30.253787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.315 [2024-12-10 12:36:30.253794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.315 [2024-12-10 12:36:30.253967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.315 [2024-12-10 12:36:30.254139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.315 [2024-12-10 12:36:30.254148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.315 [2024-12-10 12:36:30.254154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.315 [2024-12-10 12:36:30.254167] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.315 [2024-12-10 12:36:30.266202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.315 [2024-12-10 12:36:30.266572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.315 [2024-12-10 12:36:30.266588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.315 [2024-12-10 12:36:30.266596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.315 [2024-12-10 12:36:30.266768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.315 [2024-12-10 12:36:30.266944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.315 [2024-12-10 12:36:30.266952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.315 [2024-12-10 12:36:30.266958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.315 [2024-12-10 12:36:30.266965] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.315 [2024-12-10 12:36:30.279113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.315 [2024-12-10 12:36:30.279588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.315 [2024-12-10 12:36:30.279633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.315 [2024-12-10 12:36:30.279656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.315 [2024-12-10 12:36:30.280254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.315 [2024-12-10 12:36:30.280678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.315 [2024-12-10 12:36:30.280686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.315 [2024-12-10 12:36:30.280692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.315 [2024-12-10 12:36:30.280698] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.315 [2024-12-10 12:36:30.292165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.315 [2024-12-10 12:36:30.292594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.315 [2024-12-10 12:36:30.292609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.315 [2024-12-10 12:36:30.292617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.315 [2024-12-10 12:36:30.292789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.315 [2024-12-10 12:36:30.292961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.315 [2024-12-10 12:36:30.292970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.316 [2024-12-10 12:36:30.292976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.316 [2024-12-10 12:36:30.292982] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.316 [2024-12-10 12:36:30.305265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.316 [2024-12-10 12:36:30.305674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.316 [2024-12-10 12:36:30.305690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.316 [2024-12-10 12:36:30.305698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.316 [2024-12-10 12:36:30.305871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.316 [2024-12-10 12:36:30.306043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.316 [2024-12-10 12:36:30.306051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.316 [2024-12-10 12:36:30.306057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.316 [2024-12-10 12:36:30.306067] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.316 [2024-12-10 12:36:30.318119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.316 [2024-12-10 12:36:30.318541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.316 [2024-12-10 12:36:30.318557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.316 [2024-12-10 12:36:30.318565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.316 [2024-12-10 12:36:30.318737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.316 [2024-12-10 12:36:30.318910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.316 [2024-12-10 12:36:30.318919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.316 [2024-12-10 12:36:30.318925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.316 [2024-12-10 12:36:30.318931] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.316 [2024-12-10 12:36:30.331132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.316 [2024-12-10 12:36:30.331575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.316 [2024-12-10 12:36:30.331621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.316 [2024-12-10 12:36:30.331645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.316 [2024-12-10 12:36:30.332243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.316 [2024-12-10 12:36:30.332750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.316 [2024-12-10 12:36:30.332758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.316 [2024-12-10 12:36:30.332765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.316 [2024-12-10 12:36:30.332771] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.316 [2024-12-10 12:36:30.344115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.316 [2024-12-10 12:36:30.344557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.316 [2024-12-10 12:36:30.344574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.316 [2024-12-10 12:36:30.344581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.316 [2024-12-10 12:36:30.344754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.316 [2024-12-10 12:36:30.344926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.316 [2024-12-10 12:36:30.344934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.316 [2024-12-10 12:36:30.344941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.316 [2024-12-10 12:36:30.344947] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.316 [2024-12-10 12:36:30.357231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.316 [2024-12-10 12:36:30.357583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.316 [2024-12-10 12:36:30.357600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.316 [2024-12-10 12:36:30.357607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.316 [2024-12-10 12:36:30.357780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.316 [2024-12-10 12:36:30.357954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.316 [2024-12-10 12:36:30.357962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.316 [2024-12-10 12:36:30.357968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.316 [2024-12-10 12:36:30.357974] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.316 [2024-12-10 12:36:30.370232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.316 [2024-12-10 12:36:30.370590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.316 [2024-12-10 12:36:30.370606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.316 [2024-12-10 12:36:30.370613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.316 [2024-12-10 12:36:30.370785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.316 [2024-12-10 12:36:30.370958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.316 [2024-12-10 12:36:30.370966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.316 [2024-12-10 12:36:30.370972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.316 [2024-12-10 12:36:30.370979] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.316 [2024-12-10 12:36:30.383211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.316 [2024-12-10 12:36:30.383586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.316 [2024-12-10 12:36:30.383602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.316 [2024-12-10 12:36:30.383610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.316 [2024-12-10 12:36:30.383788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.316 [2024-12-10 12:36:30.383967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.316 [2024-12-10 12:36:30.383975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.316 [2024-12-10 12:36:30.383982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.316 [2024-12-10 12:36:30.383987] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.316 [2024-12-10 12:36:30.396239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.316 [2024-12-10 12:36:30.396622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.316 [2024-12-10 12:36:30.396638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.316 [2024-12-10 12:36:30.396649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.316 [2024-12-10 12:36:30.396821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.316 [2024-12-10 12:36:30.396997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.316 [2024-12-10 12:36:30.397006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.316 [2024-12-10 12:36:30.397012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.316 [2024-12-10 12:36:30.397018] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.316 [2024-12-10 12:36:30.409253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.316 [2024-12-10 12:36:30.409652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.316 [2024-12-10 12:36:30.409668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.316 [2024-12-10 12:36:30.409676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.316 [2024-12-10 12:36:30.409849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.316 [2024-12-10 12:36:30.410021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.316 [2024-12-10 12:36:30.410029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.316 [2024-12-10 12:36:30.410036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.316 [2024-12-10 12:36:30.410041] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.316 [2024-12-10 12:36:30.422332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.316 [2024-12-10 12:36:30.422720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.316 [2024-12-10 12:36:30.422737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.316 [2024-12-10 12:36:30.422744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.316 [2024-12-10 12:36:30.422916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.316 [2024-12-10 12:36:30.423089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.316 [2024-12-10 12:36:30.423098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.316 [2024-12-10 12:36:30.423104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.317 [2024-12-10 12:36:30.423110] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.317 [2024-12-10 12:36:30.435277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.317 [2024-12-10 12:36:30.435706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.317 [2024-12-10 12:36:30.435749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.317 [2024-12-10 12:36:30.435772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.317 [2024-12-10 12:36:30.436372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.317 [2024-12-10 12:36:30.436928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.317 [2024-12-10 12:36:30.436936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.317 [2024-12-10 12:36:30.436942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.317 [2024-12-10 12:36:30.436948] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.317 [2024-12-10 12:36:30.448360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.317 [2024-12-10 12:36:30.448762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.317 [2024-12-10 12:36:30.448778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.317 [2024-12-10 12:36:30.448785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.317 [2024-12-10 12:36:30.448958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.317 [2024-12-10 12:36:30.449130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.317 [2024-12-10 12:36:30.449138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.317 [2024-12-10 12:36:30.449144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.317 [2024-12-10 12:36:30.449151] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.317 [2024-12-10 12:36:30.461447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.317 [2024-12-10 12:36:30.461816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.317 [2024-12-10 12:36:30.461833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.317 [2024-12-10 12:36:30.461840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.317 [2024-12-10 12:36:30.462014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.317 [2024-12-10 12:36:30.462196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.317 [2024-12-10 12:36:30.462205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.317 [2024-12-10 12:36:30.462212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.317 [2024-12-10 12:36:30.462218] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.317 [2024-12-10 12:36:30.474450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.317 [2024-12-10 12:36:30.474874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.317 [2024-12-10 12:36:30.474891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.317 [2024-12-10 12:36:30.474898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.317 [2024-12-10 12:36:30.475076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.317 [2024-12-10 12:36:30.475262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.317 [2024-12-10 12:36:30.475271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.317 [2024-12-10 12:36:30.475277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.317 [2024-12-10 12:36:30.475288] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.577 [2024-12-10 12:36:30.487637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.577 [2024-12-10 12:36:30.488101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.577 [2024-12-10 12:36:30.488146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.577 [2024-12-10 12:36:30.488187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.577 [2024-12-10 12:36:30.488769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.577 [2024-12-10 12:36:30.489179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.577 [2024-12-10 12:36:30.489188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.577 [2024-12-10 12:36:30.489195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.577 [2024-12-10 12:36:30.489202] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.577 [2024-12-10 12:36:30.502827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.577 [2024-12-10 12:36:30.503328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.577 [2024-12-10 12:36:30.503350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.577 [2024-12-10 12:36:30.503360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.577 [2024-12-10 12:36:30.503614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.577 [2024-12-10 12:36:30.503869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.577 [2024-12-10 12:36:30.503881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.577 [2024-12-10 12:36:30.503890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.577 [2024-12-10 12:36:30.503899] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.577 [2024-12-10 12:36:30.515893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.577 [2024-12-10 12:36:30.516306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.577 [2024-12-10 12:36:30.516351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.577 [2024-12-10 12:36:30.516374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.577 [2024-12-10 12:36:30.516956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.577 [2024-12-10 12:36:30.517172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.577 [2024-12-10 12:36:30.517181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.577 [2024-12-10 12:36:30.517187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.577 [2024-12-10 12:36:30.517193] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.577 [2024-12-10 12:36:30.528939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.577 [2024-12-10 12:36:30.529354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.577 [2024-12-10 12:36:30.529370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.577 [2024-12-10 12:36:30.529378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.577 [2024-12-10 12:36:30.529555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.577 [2024-12-10 12:36:30.529734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.577 [2024-12-10 12:36:30.529742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.577 [2024-12-10 12:36:30.529748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.577 [2024-12-10 12:36:30.529754] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.577 [2024-12-10 12:36:30.542006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.577 [2024-12-10 12:36:30.542416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.577 [2024-12-10 12:36:30.542433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.577 [2024-12-10 12:36:30.542441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.577 [2024-12-10 12:36:30.542619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.577 [2024-12-10 12:36:30.542797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.577 [2024-12-10 12:36:30.542806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.577 [2024-12-10 12:36:30.542812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.577 [2024-12-10 12:36:30.542819] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.577 [2024-12-10 12:36:30.555100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.577 [2024-12-10 12:36:30.555534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.577 [2024-12-10 12:36:30.555551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.577 [2024-12-10 12:36:30.555558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.577 [2024-12-10 12:36:30.555736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.577 [2024-12-10 12:36:30.555914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.577 [2024-12-10 12:36:30.555923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.577 [2024-12-10 12:36:30.555929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.577 [2024-12-10 12:36:30.555936] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.577 [2024-12-10 12:36:30.568222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.577 [2024-12-10 12:36:30.568574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.577 [2024-12-10 12:36:30.568591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.577 [2024-12-10 12:36:30.568601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.577 [2024-12-10 12:36:30.568780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.577 [2024-12-10 12:36:30.568959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.577 [2024-12-10 12:36:30.568968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.577 [2024-12-10 12:36:30.568975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.577 [2024-12-10 12:36:30.568981] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.577 [2024-12-10 12:36:30.581431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.577 [2024-12-10 12:36:30.581754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.577 [2024-12-10 12:36:30.581771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.577 [2024-12-10 12:36:30.581779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.577 [2024-12-10 12:36:30.581958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.577 [2024-12-10 12:36:30.582137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.577 [2024-12-10 12:36:30.582145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.577 [2024-12-10 12:36:30.582152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.577 [2024-12-10 12:36:30.582172] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.578 [2024-12-10 12:36:30.594638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.578 [2024-12-10 12:36:30.595046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.578 [2024-12-10 12:36:30.595062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.578 [2024-12-10 12:36:30.595069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.578 [2024-12-10 12:36:30.595252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.578 [2024-12-10 12:36:30.595431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.578 [2024-12-10 12:36:30.595439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.578 [2024-12-10 12:36:30.595446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.578 [2024-12-10 12:36:30.595452] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.578 [2024-12-10 12:36:30.607715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.578 [2024-12-10 12:36:30.608126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.578 [2024-12-10 12:36:30.608183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.578 [2024-12-10 12:36:30.608207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.578 [2024-12-10 12:36:30.608791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.578 [2024-12-10 12:36:30.609332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.578 [2024-12-10 12:36:30.609346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.578 [2024-12-10 12:36:30.609353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.578 [2024-12-10 12:36:30.609358] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.578 [2024-12-10 12:36:30.620786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.578 [2024-12-10 12:36:30.621138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.578 [2024-12-10 12:36:30.621155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.578 [2024-12-10 12:36:30.621170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.578 [2024-12-10 12:36:30.621343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.578 [2024-12-10 12:36:30.621517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.578 [2024-12-10 12:36:30.621526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.578 [2024-12-10 12:36:30.621532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.578 [2024-12-10 12:36:30.621538] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.578 [2024-12-10 12:36:30.633898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.578 [2024-12-10 12:36:30.634315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.578 [2024-12-10 12:36:30.634333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.578 [2024-12-10 12:36:30.634340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.578 [2024-12-10 12:36:30.634513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.578 [2024-12-10 12:36:30.634684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.578 [2024-12-10 12:36:30.634693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.578 [2024-12-10 12:36:30.634699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.578 [2024-12-10 12:36:30.634705] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.578 [2024-12-10 12:36:30.646818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.578 [2024-12-10 12:36:30.647337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.578 [2024-12-10 12:36:30.647382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.578 [2024-12-10 12:36:30.647405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.578 [2024-12-10 12:36:30.647988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.578 [2024-12-10 12:36:30.648523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.578 [2024-12-10 12:36:30.648531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.578 [2024-12-10 12:36:30.648537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.578 [2024-12-10 12:36:30.648547] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.578 [2024-12-10 12:36:30.659752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.578 [2024-12-10 12:36:30.660122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.578 [2024-12-10 12:36:30.660138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.578 [2024-12-10 12:36:30.660145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.578 [2024-12-10 12:36:30.660323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.578 [2024-12-10 12:36:30.660497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.578 [2024-12-10 12:36:30.660505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.578 [2024-12-10 12:36:30.660511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.578 [2024-12-10 12:36:30.660517] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.578 [2024-12-10 12:36:30.672651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.578 [2024-12-10 12:36:30.673098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.578 [2024-12-10 12:36:30.673140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.578 [2024-12-10 12:36:30.673178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.578 [2024-12-10 12:36:30.673624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.578 [2024-12-10 12:36:30.673798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.578 [2024-12-10 12:36:30.673806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.578 [2024-12-10 12:36:30.673812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.578 [2024-12-10 12:36:30.673818] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.578 [2024-12-10 12:36:30.685459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.578 [2024-12-10 12:36:30.685855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.578 [2024-12-10 12:36:30.685871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.578 [2024-12-10 12:36:30.685878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.578 [2024-12-10 12:36:30.686051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.578 [2024-12-10 12:36:30.686228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.578 [2024-12-10 12:36:30.686237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.578 [2024-12-10 12:36:30.686243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.578 [2024-12-10 12:36:30.686249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.578 [2024-12-10 12:36:30.698537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.578 [2024-12-10 12:36:30.698942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.578 [2024-12-10 12:36:30.698957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.578 [2024-12-10 12:36:30.698964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.578 [2024-12-10 12:36:30.699136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.578 [2024-12-10 12:36:30.699315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.578 [2024-12-10 12:36:30.699324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.578 [2024-12-10 12:36:30.699330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.578 [2024-12-10 12:36:30.699336] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.578 [2024-12-10 12:36:30.711408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.578 [2024-12-10 12:36:30.711749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.578 [2024-12-10 12:36:30.711765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.578 [2024-12-10 12:36:30.711772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.578 [2024-12-10 12:36:30.711945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.578 [2024-12-10 12:36:30.712118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.578 [2024-12-10 12:36:30.712125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.578 [2024-12-10 12:36:30.712131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.578 [2024-12-10 12:36:30.712137] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.578 [2024-12-10 12:36:30.724369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.579 [2024-12-10 12:36:30.724847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.579 [2024-12-10 12:36:30.724863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.579 [2024-12-10 12:36:30.724870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.579 [2024-12-10 12:36:30.725043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.579 [2024-12-10 12:36:30.725221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.579 [2024-12-10 12:36:30.725230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.579 [2024-12-10 12:36:30.725236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.579 [2024-12-10 12:36:30.725242] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.579 5695.20 IOPS, 22.25 MiB/s [2024-12-10T11:36:30.747Z] [2024-12-10 12:36:30.738587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.579 [2024-12-10 12:36:30.739055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.579 [2024-12-10 12:36:30.739072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.579 [2024-12-10 12:36:30.739083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.579 [2024-12-10 12:36:30.739268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.579 [2024-12-10 12:36:30.739451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.579 [2024-12-10 12:36:30.739459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.579 [2024-12-10 12:36:30.739466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.579 [2024-12-10 12:36:30.739472] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.838 [2024-12-10 12:36:30.751529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.838 [2024-12-10 12:36:30.751900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.838 [2024-12-10 12:36:30.751943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.838 [2024-12-10 12:36:30.751966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.838 [2024-12-10 12:36:30.752446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.838 [2024-12-10 12:36:30.752622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.838 [2024-12-10 12:36:30.752630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.838 [2024-12-10 12:36:30.752637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.838 [2024-12-10 12:36:30.752642] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.838 [2024-12-10 12:36:30.764388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.838 [2024-12-10 12:36:30.764675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.838 [2024-12-10 12:36:30.764691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.838 [2024-12-10 12:36:30.764698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.838 [2024-12-10 12:36:30.764870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.838 [2024-12-10 12:36:30.765042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.838 [2024-12-10 12:36:30.765051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.838 [2024-12-10 12:36:30.765058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.838 [2024-12-10 12:36:30.765064] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.838 [2024-12-10 12:36:30.777416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:08.838 [2024-12-10 12:36:30.777847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:08.838 [2024-12-10 12:36:30.777863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:08.838 [2024-12-10 12:36:30.777871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:08.838 [2024-12-10 12:36:30.778043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:08.838 [2024-12-10 12:36:30.778229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:08.838 [2024-12-10 12:36:30.778239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:08.838 [2024-12-10 12:36:30.778245] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:08.838 [2024-12-10 12:36:30.778251] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:08.838 [2024-12-10 12:36:30.790494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.838 [2024-12-10 12:36:30.790881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.838 [2024-12-10 12:36:30.790925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.838 [2024-12-10 12:36:30.790947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.838 [2024-12-10 12:36:30.791484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.838 [2024-12-10 12:36:30.791659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.838 [2024-12-10 12:36:30.791667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.838 [2024-12-10 12:36:30.791673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.838 [2024-12-10 12:36:30.791679] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.838 [2024-12-10 12:36:30.803446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.838 [2024-12-10 12:36:30.803784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.838 [2024-12-10 12:36:30.803800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.838 [2024-12-10 12:36:30.803807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.838 [2024-12-10 12:36:30.803979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.838 [2024-12-10 12:36:30.804156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.838 [2024-12-10 12:36:30.804172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.838 [2024-12-10 12:36:30.804178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.838 [2024-12-10 12:36:30.804184] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.838 [2024-12-10 12:36:30.816484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.838 [2024-12-10 12:36:30.816979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.838 [2024-12-10 12:36:30.817021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.838 [2024-12-10 12:36:30.817044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.838 [2024-12-10 12:36:30.817618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.838 [2024-12-10 12:36:30.817793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.838 [2024-12-10 12:36:30.817801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.838 [2024-12-10 12:36:30.817811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.838 [2024-12-10 12:36:30.817817] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.838 [2024-12-10 12:36:30.829324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.838 [2024-12-10 12:36:30.829699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.838 [2024-12-10 12:36:30.829714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.838 [2024-12-10 12:36:30.829721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.838 [2024-12-10 12:36:30.829894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.838 [2024-12-10 12:36:30.830065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.838 [2024-12-10 12:36:30.830073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.838 [2024-12-10 12:36:30.830080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.838 [2024-12-10 12:36:30.830086] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.838 [2024-12-10 12:36:30.842314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.838 [2024-12-10 12:36:30.842613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.838 [2024-12-10 12:36:30.842660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.838 [2024-12-10 12:36:30.842684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.838 [2024-12-10 12:36:30.843280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.838 [2024-12-10 12:36:30.843869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.838 [2024-12-10 12:36:30.843899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.838 [2024-12-10 12:36:30.843905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.838 [2024-12-10 12:36:30.843912] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.838 [2024-12-10 12:36:30.855256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.838 [2024-12-10 12:36:30.855608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.838 [2024-12-10 12:36:30.855624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.838 [2024-12-10 12:36:30.855631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.838 [2024-12-10 12:36:30.855803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.838 [2024-12-10 12:36:30.855976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.838 [2024-12-10 12:36:30.855984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.838 [2024-12-10 12:36:30.855990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.839 [2024-12-10 12:36:30.855996] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.839 [2024-12-10 12:36:30.868414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.839 [2024-12-10 12:36:30.868752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.839 [2024-12-10 12:36:30.868768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.839 [2024-12-10 12:36:30.868776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.839 [2024-12-10 12:36:30.868948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.839 [2024-12-10 12:36:30.869122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.839 [2024-12-10 12:36:30.869131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.839 [2024-12-10 12:36:30.869137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.839 [2024-12-10 12:36:30.869143] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.839 [2024-12-10 12:36:30.881442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.839 [2024-12-10 12:36:30.881843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.839 [2024-12-10 12:36:30.881885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.839 [2024-12-10 12:36:30.881908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.839 [2024-12-10 12:36:30.882504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.839 [2024-12-10 12:36:30.882997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.839 [2024-12-10 12:36:30.883005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.839 [2024-12-10 12:36:30.883011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.839 [2024-12-10 12:36:30.883018] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.839 [2024-12-10 12:36:30.894360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.839 [2024-12-10 12:36:30.894732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.839 [2024-12-10 12:36:30.894776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.839 [2024-12-10 12:36:30.894799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.839 [2024-12-10 12:36:30.895396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.839 [2024-12-10 12:36:30.895967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.839 [2024-12-10 12:36:30.895975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.839 [2024-12-10 12:36:30.895981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.839 [2024-12-10 12:36:30.895987] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.839 [2024-12-10 12:36:30.907234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.839 [2024-12-10 12:36:30.907637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.839 [2024-12-10 12:36:30.907680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.839 [2024-12-10 12:36:30.907710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.839 [2024-12-10 12:36:30.908190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.839 [2024-12-10 12:36:30.908364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.839 [2024-12-10 12:36:30.908372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.839 [2024-12-10 12:36:30.908378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.839 [2024-12-10 12:36:30.908384] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.839 [2024-12-10 12:36:30.920120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.839 [2024-12-10 12:36:30.920492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.839 [2024-12-10 12:36:30.920537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.839 [2024-12-10 12:36:30.920560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.839 [2024-12-10 12:36:30.921031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.839 [2024-12-10 12:36:30.921210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.839 [2024-12-10 12:36:30.921219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.839 [2024-12-10 12:36:30.921225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.839 [2024-12-10 12:36:30.921231] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.839 [2024-12-10 12:36:30.933040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.839 [2024-12-10 12:36:30.933464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.839 [2024-12-10 12:36:30.933494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.839 [2024-12-10 12:36:30.933518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.839 [2024-12-10 12:36:30.934100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.839 [2024-12-10 12:36:30.934517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.839 [2024-12-10 12:36:30.934535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.839 [2024-12-10 12:36:30.934548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.839 [2024-12-10 12:36:30.934561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.839 [2024-12-10 12:36:30.947857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.839 [2024-12-10 12:36:30.948288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.839 [2024-12-10 12:36:30.948311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.839 [2024-12-10 12:36:30.948321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.839 [2024-12-10 12:36:30.948575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.839 [2024-12-10 12:36:30.948835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.839 [2024-12-10 12:36:30.948847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.839 [2024-12-10 12:36:30.948856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.839 [2024-12-10 12:36:30.948865] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.839 [2024-12-10 12:36:30.960877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.839 [2024-12-10 12:36:30.961272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.839 [2024-12-10 12:36:30.961289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.839 [2024-12-10 12:36:30.961296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.839 [2024-12-10 12:36:30.961468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.839 [2024-12-10 12:36:30.961641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.839 [2024-12-10 12:36:30.961649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.839 [2024-12-10 12:36:30.961655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.839 [2024-12-10 12:36:30.961661] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.839 [2024-12-10 12:36:30.973784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.839 [2024-12-10 12:36:30.974219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.839 [2024-12-10 12:36:30.974265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.839 [2024-12-10 12:36:30.974288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.839 [2024-12-10 12:36:30.974869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.839 [2024-12-10 12:36:30.975131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.839 [2024-12-10 12:36:30.975139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.839 [2024-12-10 12:36:30.975145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.839 [2024-12-10 12:36:30.975151] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.839 [2024-12-10 12:36:30.986616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.839 [2024-12-10 12:36:30.987040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.839 [2024-12-10 12:36:30.987057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.839 [2024-12-10 12:36:30.987064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.839 [2024-12-10 12:36:30.987244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.839 [2024-12-10 12:36:30.987417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.839 [2024-12-10 12:36:30.987425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.839 [2024-12-10 12:36:30.987434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.840 [2024-12-10 12:36:30.987441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:08.840 [2024-12-10 12:36:30.999650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:08.840 [2024-12-10 12:36:31.000064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:08.840 [2024-12-10 12:36:31.000080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:08.840 [2024-12-10 12:36:31.000088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:08.840 [2024-12-10 12:36:31.000272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:08.840 [2024-12-10 12:36:31.000451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:08.840 [2024-12-10 12:36:31.000459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:08.840 [2024-12-10 12:36:31.000466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:08.840 [2024-12-10 12:36:31.000472] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.099 [2024-12-10 12:36:31.012617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.099 [2024-12-10 12:36:31.013050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.099 [2024-12-10 12:36:31.013094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.099 [2024-12-10 12:36:31.013118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.099 [2024-12-10 12:36:31.013713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.099 [2024-12-10 12:36:31.014272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.099 [2024-12-10 12:36:31.014280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.099 [2024-12-10 12:36:31.014286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.099 [2024-12-10 12:36:31.014293] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.099 [2024-12-10 12:36:31.025534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.099 [2024-12-10 12:36:31.025959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.099 [2024-12-10 12:36:31.025975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.099 [2024-12-10 12:36:31.025982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.099 [2024-12-10 12:36:31.026155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.099 [2024-12-10 12:36:31.026336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.099 [2024-12-10 12:36:31.026344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.099 [2024-12-10 12:36:31.026350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.099 [2024-12-10 12:36:31.026356] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.099 [2024-12-10 12:36:31.038470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.099 [2024-12-10 12:36:31.038888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.099 [2024-12-10 12:36:31.038904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.099 [2024-12-10 12:36:31.038911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.099 [2024-12-10 12:36:31.039083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.099 [2024-12-10 12:36:31.039263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.099 [2024-12-10 12:36:31.039272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.099 [2024-12-10 12:36:31.039278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.099 [2024-12-10 12:36:31.039284] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.099 [2024-12-10 12:36:31.051354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.099 [2024-12-10 12:36:31.051726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.099 [2024-12-10 12:36:31.051741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.099 [2024-12-10 12:36:31.051748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.099 [2024-12-10 12:36:31.051911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.099 [2024-12-10 12:36:31.052074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.099 [2024-12-10 12:36:31.052082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.099 [2024-12-10 12:36:31.052088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.099 [2024-12-10 12:36:31.052093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.099 [2024-12-10 12:36:31.064225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.099 [2024-12-10 12:36:31.064648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.099 [2024-12-10 12:36:31.064664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.099 [2024-12-10 12:36:31.064671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.099 [2024-12-10 12:36:31.064844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.099 [2024-12-10 12:36:31.065020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.099 [2024-12-10 12:36:31.065028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.099 [2024-12-10 12:36:31.065034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.100 [2024-12-10 12:36:31.065040] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.100 [2024-12-10 12:36:31.077132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.100 [2024-12-10 12:36:31.077561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.100 [2024-12-10 12:36:31.077606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.100 [2024-12-10 12:36:31.077637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.100 [2024-12-10 12:36:31.078182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.100 [2024-12-10 12:36:31.078356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.100 [2024-12-10 12:36:31.078364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.100 [2024-12-10 12:36:31.078371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.100 [2024-12-10 12:36:31.078377] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.100 [2024-12-10 12:36:31.089986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.100 [2024-12-10 12:36:31.090385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.100 [2024-12-10 12:36:31.090402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.100 [2024-12-10 12:36:31.090409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.100 [2024-12-10 12:36:31.090581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.100 [2024-12-10 12:36:31.090754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.100 [2024-12-10 12:36:31.090762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.100 [2024-12-10 12:36:31.090769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.100 [2024-12-10 12:36:31.090775] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.100 [2024-12-10 12:36:31.102853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.100 [2024-12-10 12:36:31.103279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.100 [2024-12-10 12:36:31.103323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.100 [2024-12-10 12:36:31.103346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.100 [2024-12-10 12:36:31.103928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.100 [2024-12-10 12:36:31.104476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.100 [2024-12-10 12:36:31.104484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.100 [2024-12-10 12:36:31.104490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.100 [2024-12-10 12:36:31.104496] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.100 [2024-12-10 12:36:31.115660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.100 [2024-12-10 12:36:31.116069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.100 [2024-12-10 12:36:31.116085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.100 [2024-12-10 12:36:31.116093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.100 [2024-12-10 12:36:31.116272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.100 [2024-12-10 12:36:31.116449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.100 [2024-12-10 12:36:31.116457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.100 [2024-12-10 12:36:31.116463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.100 [2024-12-10 12:36:31.116469] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.100 [2024-12-10 12:36:31.128694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.100 [2024-12-10 12:36:31.129107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.100 [2024-12-10 12:36:31.129151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.100 [2024-12-10 12:36:31.129188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.100 [2024-12-10 12:36:31.129579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.100 [2024-12-10 12:36:31.129753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.100 [2024-12-10 12:36:31.129761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.100 [2024-12-10 12:36:31.129767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.100 [2024-12-10 12:36:31.129773] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.100 [2024-12-10 12:36:31.141569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.100 [2024-12-10 12:36:31.141963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.100 [2024-12-10 12:36:31.141979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.100 [2024-12-10 12:36:31.141986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.100 [2024-12-10 12:36:31.142148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.100 [2024-12-10 12:36:31.142340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.100 [2024-12-10 12:36:31.142349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.100 [2024-12-10 12:36:31.142355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.100 [2024-12-10 12:36:31.142361] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.100 [2024-12-10 12:36:31.154453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.100 [2024-12-10 12:36:31.154876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.100 [2024-12-10 12:36:31.154920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.100 [2024-12-10 12:36:31.154942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.100 [2024-12-10 12:36:31.155474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.100 [2024-12-10 12:36:31.155647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.100 [2024-12-10 12:36:31.155655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.100 [2024-12-10 12:36:31.155665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.100 [2024-12-10 12:36:31.155672] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.100 [2024-12-10 12:36:31.167385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.100 [2024-12-10 12:36:31.167777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.100 [2024-12-10 12:36:31.167793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.100 [2024-12-10 12:36:31.167799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.100 [2024-12-10 12:36:31.167962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.100 [2024-12-10 12:36:31.168126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.100 [2024-12-10 12:36:31.168133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.100 [2024-12-10 12:36:31.168139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.100 [2024-12-10 12:36:31.168145] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.100 [2024-12-10 12:36:31.180242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.100 [2024-12-10 12:36:31.180656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.100 [2024-12-10 12:36:31.180672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.100 [2024-12-10 12:36:31.180679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.100 [2024-12-10 12:36:31.180851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.100 [2024-12-10 12:36:31.181023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.100 [2024-12-10 12:36:31.181031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.100 [2024-12-10 12:36:31.181037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.100 [2024-12-10 12:36:31.181043] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.100 [2024-12-10 12:36:31.193071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.100 [2024-12-10 12:36:31.193499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.100 [2024-12-10 12:36:31.193544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.100 [2024-12-10 12:36:31.193567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.100 [2024-12-10 12:36:31.194149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.100 [2024-12-10 12:36:31.194716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.100 [2024-12-10 12:36:31.194724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.100 [2024-12-10 12:36:31.194730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.100 [2024-12-10 12:36:31.194736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.100 [2024-12-10 12:36:31.205900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.101 [2024-12-10 12:36:31.206334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.101 [2024-12-10 12:36:31.206378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.101 [2024-12-10 12:36:31.206400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.101 [2024-12-10 12:36:31.206798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.101 [2024-12-10 12:36:31.206971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.101 [2024-12-10 12:36:31.206979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.101 [2024-12-10 12:36:31.206985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.101 [2024-12-10 12:36:31.206991] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.101 [2024-12-10 12:36:31.220909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.101 [2024-12-10 12:36:31.221418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.101 [2024-12-10 12:36:31.221463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.101 [2024-12-10 12:36:31.221486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.101 [2024-12-10 12:36:31.221976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.101 [2024-12-10 12:36:31.222240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.101 [2024-12-10 12:36:31.222252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.101 [2024-12-10 12:36:31.222261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.101 [2024-12-10 12:36:31.222270] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.101 [2024-12-10 12:36:31.233937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.101 [2024-12-10 12:36:31.234366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.101 [2024-12-10 12:36:31.234383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.101 [2024-12-10 12:36:31.234390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.101 [2024-12-10 12:36:31.234563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.101 [2024-12-10 12:36:31.234736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.101 [2024-12-10 12:36:31.234744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.101 [2024-12-10 12:36:31.234750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.101 [2024-12-10 12:36:31.234756] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.101 [2024-12-10 12:36:31.246885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.101 [2024-12-10 12:36:31.247305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.101 [2024-12-10 12:36:31.247333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.101 [2024-12-10 12:36:31.247346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.101 [2024-12-10 12:36:31.247510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.101 [2024-12-10 12:36:31.247673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.101 [2024-12-10 12:36:31.247681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.101 [2024-12-10 12:36:31.247687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.101 [2024-12-10 12:36:31.247692] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.101 [2024-12-10 12:36:31.260000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.101 [2024-12-10 12:36:31.260416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.101 [2024-12-10 12:36:31.260434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.101 [2024-12-10 12:36:31.260442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.101 [2024-12-10 12:36:31.260619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.101 [2024-12-10 12:36:31.260797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.101 [2024-12-10 12:36:31.260805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.101 [2024-12-10 12:36:31.260811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.101 [2024-12-10 12:36:31.260818] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.362 [2024-12-10 12:36:31.273032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.362 [2024-12-10 12:36:31.273446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.362 [2024-12-10 12:36:31.273463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.362 [2024-12-10 12:36:31.273470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.362 [2024-12-10 12:36:31.273643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.362 [2024-12-10 12:36:31.273815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.362 [2024-12-10 12:36:31.273823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.362 [2024-12-10 12:36:31.273829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.362 [2024-12-10 12:36:31.273835] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/bdevperf.sh: line 35: 1783868 Killed "${NVMF_APP[@]}" "$@" 00:28:09.362 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:09.362 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:09.362 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:09.362 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:09.362 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:09.362 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1785265 00:28:09.362 [2024-12-10 12:36:31.286269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.362 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1785265 00:28:09.362 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1785265 ']' 00:28:09.362 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.362 [2024-12-10 12:36:31.286733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.362 [2024-12-10 12:36:31.286758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.362 [2024-12-10 12:36:31.286771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.362 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:09.362 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:09.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.362 [2024-12-10 12:36:31.286977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.362 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:09.362 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:09.362 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:09.362 [2024-12-10 12:36:31.287190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.362 [2024-12-10 12:36:31.287206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.362 [2024-12-10 12:36:31.287217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.362 [2024-12-10 12:36:31.287228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.362 [2024-12-10 12:36:31.299312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.362 [2024-12-10 12:36:31.299769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.362 [2024-12-10 12:36:31.299786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.362 [2024-12-10 12:36:31.299795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.362 [2024-12-10 12:36:31.299973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.362 [2024-12-10 12:36:31.300152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.362 [2024-12-10 12:36:31.300169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.362 [2024-12-10 12:36:31.300176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.363 [2024-12-10 12:36:31.300183] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.363 [2024-12-10 12:36:31.312304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.363 [2024-12-10 12:36:31.312735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.363 [2024-12-10 12:36:31.312752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.363 [2024-12-10 12:36:31.312760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.363 [2024-12-10 12:36:31.312935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.363 [2024-12-10 12:36:31.313109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.363 [2024-12-10 12:36:31.313117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.363 [2024-12-10 12:36:31.313128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.363 [2024-12-10 12:36:31.313134] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:09.363 [2024-12-10 12:36:31.325381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:09.363 [2024-12-10 12:36:31.325837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:09.363 [2024-12-10 12:36:31.325853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420 00:28:09.363 [2024-12-10 12:36:31.325861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set 00:28:09.363 [2024-12-10 12:36:31.326038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor 00:28:09.363 [2024-12-10 12:36:31.326230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:09.363 [2024-12-10 12:36:31.326240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:09.363 [2024-12-10 12:36:31.326246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:09.363 [2024-12-10 12:36:31.326253] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:09.363 [2024-12-10 12:36:31.333618] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:28:09.363 [2024-12-10 12:36:31.333655] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:09.363 [2024-12-10 12:36:31.338503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.363 [2024-12-10 12:36:31.338933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.363 [2024-12-10 12:36:31.338950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.363 [2024-12-10 12:36:31.338958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.363 [2024-12-10 12:36:31.339151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.363 [2024-12-10 12:36:31.339336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.363 [2024-12-10 12:36:31.339345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.363 [2024-12-10 12:36:31.339351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.363 [2024-12-10 12:36:31.339357] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.363 [2024-12-10 12:36:31.351584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.363 [2024-12-10 12:36:31.352013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.363 [2024-12-10 12:36:31.352029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.363 [2024-12-10 12:36:31.352037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.363 [2024-12-10 12:36:31.352219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.363 [2024-12-10 12:36:31.352413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.363 [2024-12-10 12:36:31.352422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.363 [2024-12-10 12:36:31.352428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.363 [2024-12-10 12:36:31.352434] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.363 [2024-12-10 12:36:31.364738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.363 [2024-12-10 12:36:31.365088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.363 [2024-12-10 12:36:31.365105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.363 [2024-12-10 12:36:31.365112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.363 [2024-12-10 12:36:31.365298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.363 [2024-12-10 12:36:31.365477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.363 [2024-12-10 12:36:31.365485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.363 [2024-12-10 12:36:31.365492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.363 [2024-12-10 12:36:31.365498] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.363 [2024-12-10 12:36:31.377947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.363 [2024-12-10 12:36:31.378398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.363 [2024-12-10 12:36:31.378415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.363 [2024-12-10 12:36:31.378423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.363 [2024-12-10 12:36:31.378601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.363 [2024-12-10 12:36:31.378781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.363 [2024-12-10 12:36:31.378792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.363 [2024-12-10 12:36:31.378798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.363 [2024-12-10 12:36:31.378805] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.363 [2024-12-10 12:36:31.390975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.363 [2024-12-10 12:36:31.391415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.363 [2024-12-10 12:36:31.391432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.363 [2024-12-10 12:36:31.391441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.363 [2024-12-10 12:36:31.391615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.363 [2024-12-10 12:36:31.391790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.363 [2024-12-10 12:36:31.391801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.363 [2024-12-10 12:36:31.391808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.363 [2024-12-10 12:36:31.391815] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.363 [2024-12-10 12:36:31.404042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.363 [2024-12-10 12:36:31.404483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.363 [2024-12-10 12:36:31.404501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.363 [2024-12-10 12:36:31.404509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.363 [2024-12-10 12:36:31.404683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.363 [2024-12-10 12:36:31.404857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.363 [2024-12-10 12:36:31.404867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.363 [2024-12-10 12:36:31.404874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.363 [2024-12-10 12:36:31.404881] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.363 [2024-12-10 12:36:31.415223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:09.363 [2024-12-10 12:36:31.417160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.363 [2024-12-10 12:36:31.417567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.363 [2024-12-10 12:36:31.417584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.363 [2024-12-10 12:36:31.417592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.363 [2024-12-10 12:36:31.417765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.363 [2024-12-10 12:36:31.417944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.363 [2024-12-10 12:36:31.417953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.363 [2024-12-10 12:36:31.417959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.363 [2024-12-10 12:36:31.417965] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.363 [2024-12-10 12:36:31.430228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.363 [2024-12-10 12:36:31.430621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.363 [2024-12-10 12:36:31.430639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.363 [2024-12-10 12:36:31.430647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.363 [2024-12-10 12:36:31.430821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.364 [2024-12-10 12:36:31.430994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.364 [2024-12-10 12:36:31.431002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.364 [2024-12-10 12:36:31.431009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.364 [2024-12-10 12:36:31.431019] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.364 [2024-12-10 12:36:31.443389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.364 [2024-12-10 12:36:31.443825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.364 [2024-12-10 12:36:31.443842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.364 [2024-12-10 12:36:31.443850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.364 [2024-12-10 12:36:31.444023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.364 [2024-12-10 12:36:31.444202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.364 [2024-12-10 12:36:31.444212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.364 [2024-12-10 12:36:31.444219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.364 [2024-12-10 12:36:31.444226] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.364 [2024-12-10 12:36:31.455720] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:09.364 [2024-12-10 12:36:31.455746] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:09.364 [2024-12-10 12:36:31.455753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:09.364 [2024-12-10 12:36:31.455759] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:09.364 [2024-12-10 12:36:31.455765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:09.364 [2024-12-10 12:36:31.456677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.364 [2024-12-10 12:36:31.457013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:28:09.364 [2024-12-10 12:36:31.457118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.364 [2024-12-10 12:36:31.457123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:09.364 [2024-12-10 12:36:31.457140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.364 [2024-12-10 12:36:31.457149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.364 [2024-12-10 12:36:31.457125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:28:09.364 [2024-12-10 12:36:31.457334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.364 [2024-12-10 12:36:31.457516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.364 [2024-12-10 12:36:31.457524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.364 [2024-12-10 12:36:31.457531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.364 [2024-12-10 12:36:31.457537] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.364 [2024-12-10 12:36:31.469817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.364 [2024-12-10 12:36:31.470277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.364 [2024-12-10 12:36:31.470298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.364 [2024-12-10 12:36:31.470307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.364 [2024-12-10 12:36:31.470491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.364 [2024-12-10 12:36:31.470672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.364 [2024-12-10 12:36:31.470681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.364 [2024-12-10 12:36:31.470688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.364 [2024-12-10 12:36:31.470695] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.364 [2024-12-10 12:36:31.482953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.364 [2024-12-10 12:36:31.483413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.364 [2024-12-10 12:36:31.483434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.364 [2024-12-10 12:36:31.483442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.364 [2024-12-10 12:36:31.483623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.364 [2024-12-10 12:36:31.483803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.364 [2024-12-10 12:36:31.483811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.364 [2024-12-10 12:36:31.483818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.364 [2024-12-10 12:36:31.483825] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.364 [2024-12-10 12:36:31.496088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.364 [2024-12-10 12:36:31.496539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.364 [2024-12-10 12:36:31.496559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.364 [2024-12-10 12:36:31.496568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.364 [2024-12-10 12:36:31.496746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.364 [2024-12-10 12:36:31.496925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.364 [2024-12-10 12:36:31.496933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.364 [2024-12-10 12:36:31.496940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.364 [2024-12-10 12:36:31.496947] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.364 [2024-12-10 12:36:31.509215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.364 [2024-12-10 12:36:31.509577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.364 [2024-12-10 12:36:31.509597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.364 [2024-12-10 12:36:31.509605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.364 [2024-12-10 12:36:31.509785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.364 [2024-12-10 12:36:31.509965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.364 [2024-12-10 12:36:31.509980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.364 [2024-12-10 12:36:31.509987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.364 [2024-12-10 12:36:31.509994] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.364 [2024-12-10 12:36:31.522423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.364 [2024-12-10 12:36:31.522864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.364 [2024-12-10 12:36:31.522882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.364 [2024-12-10 12:36:31.522890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.364 [2024-12-10 12:36:31.523068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.364 [2024-12-10 12:36:31.523255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.364 [2024-12-10 12:36:31.523264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.364 [2024-12-10 12:36:31.523271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.364 [2024-12-10 12:36:31.523278] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.623 [2024-12-10 12:36:31.535805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.623 [2024-12-10 12:36:31.536181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.623 [2024-12-10 12:36:31.536210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.623 [2024-12-10 12:36:31.536222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.623 [2024-12-10 12:36:31.536424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.623 [2024-12-10 12:36:31.536624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.623 [2024-12-10 12:36:31.536639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.623 [2024-12-10 12:36:31.536649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.623 [2024-12-10 12:36:31.536659] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.623 [2024-12-10 12:36:31.549258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.623 [2024-12-10 12:36:31.549661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.623 [2024-12-10 12:36:31.549685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.623 [2024-12-10 12:36:31.549695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.623 [2024-12-10 12:36:31.549920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.623 [2024-12-10 12:36:31.550140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.623 [2024-12-10 12:36:31.550151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.623 [2024-12-10 12:36:31.550168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.623 [2024-12-10 12:36:31.550183] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.623 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:09.623 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:28:09.623 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:09.623 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:09.623 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:09.623 [2024-12-10 12:36:31.562433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.623 [2024-12-10 12:36:31.562875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.623 [2024-12-10 12:36:31.562893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.623 [2024-12-10 12:36:31.562901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.623 [2024-12-10 12:36:31.563080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.623 [2024-12-10 12:36:31.563265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.623 [2024-12-10 12:36:31.563274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.623 [2024-12-10 12:36:31.563281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.623 [2024-12-10 12:36:31.563287] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.623 [2024-12-10 12:36:31.575552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.623 [2024-12-10 12:36:31.575886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.623 [2024-12-10 12:36:31.575903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.623 [2024-12-10 12:36:31.575911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.623 [2024-12-10 12:36:31.576089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.623 [2024-12-10 12:36:31.576274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.623 [2024-12-10 12:36:31.576283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.623 [2024-12-10 12:36:31.576289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.623 [2024-12-10 12:36:31.576296] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.623 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:09.623 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:28:09.623 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.623 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:09.623 [2024-12-10 12:36:31.588725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.623 [2024-12-10 12:36:31.588995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.623 [2024-12-10 12:36:31.589012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.623 [2024-12-10 12:36:31.589019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.623 [2024-12-10 12:36:31.589206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.623 [2024-12-10 12:36:31.589385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.623 [2024-12-10 12:36:31.589394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.623 [2024-12-10 12:36:31.589401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.623 [2024-12-10 12:36:31.589408] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.623 [2024-12-10 12:36:31.593420] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:09.623 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.623 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:09.623 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.623 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:09.623 [2024-12-10 12:36:31.601827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.623 [2024-12-10 12:36:31.602264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.623 [2024-12-10 12:36:31.602282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.623 [2024-12-10 12:36:31.602289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.623 [2024-12-10 12:36:31.602467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.623 [2024-12-10 12:36:31.602645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.623 [2024-12-10 12:36:31.602653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.623 [2024-12-10 12:36:31.602660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.623 [2024-12-10 12:36:31.602666] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.623 [2024-12-10 12:36:31.614937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.623 [2024-12-10 12:36:31.615372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.623 [2024-12-10 12:36:31.615389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.623 [2024-12-10 12:36:31.615396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.623 [2024-12-10 12:36:31.615573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.623 [2024-12-10 12:36:31.615751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.623 [2024-12-10 12:36:31.615760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.623 [2024-12-10 12:36:31.615766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.623 [2024-12-10 12:36:31.615773] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.623 [2024-12-10 12:36:31.628051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.623 [2024-12-10 12:36:31.628447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.623 [2024-12-10 12:36:31.628464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.623 [2024-12-10 12:36:31.628477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.623 [2024-12-10 12:36:31.628655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.623 [2024-12-10 12:36:31.628846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.623 [2024-12-10 12:36:31.628856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.623 [2024-12-10 12:36:31.628862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.623 [2024-12-10 12:36:31.628869] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.623 Malloc0
00:28:09.623 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.623 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:09.623 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.623 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:09.623 [2024-12-10 12:36:31.641150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.624 [2024-12-10 12:36:31.641471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.624 [2024-12-10 12:36:31.641488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.624 [2024-12-10 12:36:31.641496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.624 [2024-12-10 12:36:31.641674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.624 [2024-12-10 12:36:31.641853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.624 [2024-12-10 12:36:31.641862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.624 [2024-12-10 12:36:31.641869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.624 [2024-12-10 12:36:31.641875] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.624 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.624 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:09.624 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.624 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:09.624 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.624 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:09.624 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:09.624 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:09.624 [2024-12-10 12:36:31.654301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.624 [2024-12-10 12:36:31.654718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:09.624 [2024-12-10 12:36:31.654735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11061a0 with addr=10.0.0.2, port=4420
00:28:09.624 [2024-12-10 12:36:31.654743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11061a0 is same with the state(6) to be set
00:28:09.624 [2024-12-10 12:36:31.654749] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:09.624 [2024-12-10 12:36:31.654922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11061a0 (9): Bad file descriptor
00:28:09.624 [2024-12-10 12:36:31.655102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:09.624 [2024-12-10 12:36:31.655110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:09.624 [2024-12-10 12:36:31.655117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:09.624 [2024-12-10 12:36:31.655123] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:09.624 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:09.624 12:36:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1784343
[2024-12-10 12:36:31.667372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:09.882 4746.00 IOPS, 18.54 MiB/s [2024-12-10T11:36:32.050Z]
[2024-12-10 12:36:31.811925] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:28:11.767 5528.00 IOPS, 21.59 MiB/s [2024-12-10T11:36:34.887Z] 6241.25 IOPS, 24.38 MiB/s [2024-12-10T11:36:35.822Z] 6801.67 IOPS, 26.57 MiB/s [2024-12-10T11:36:36.758Z] 7231.70 IOPS, 28.25 MiB/s [2024-12-10T11:36:38.135Z] 7591.18 IOPS, 29.65 MiB/s [2024-12-10T11:36:39.072Z] 7886.08 IOPS, 30.81 MiB/s [2024-12-10T11:36:40.008Z] 8143.92 IOPS, 31.81 MiB/s [2024-12-10T11:36:40.945Z] 8357.29 IOPS, 32.65 MiB/s [2024-12-10T11:36:40.945Z] 8538.73 IOPS, 33.35 MiB/s 00:28:18.777 Latency(us) 00:28:18.777 [2024-12-10T11:36:40.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.777 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:18.777 Verification LBA range: start 0x0 length 0x4000 00:28:18.777 Nvme1n1 : 15.01 8540.65 33.36 11104.99 0.00 6495.39 439.87 16868.40 00:28:18.777 [2024-12-10T11:36:40.945Z] =================================================================================================================== 00:28:18.777 [2024-12-10T11:36:40.945Z] Total : 8540.65 33.36 11104.99 0.00 6495.39 439.87 16868.40 00:28:18.777 12:36:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:18.777 12:36:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:18.777 12:36:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.777 12:36:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:18.777 12:36:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.777 12:36:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:18.777 12:36:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:18.777 12:36:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:18.777 12:36:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:28:19.036 12:36:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:19.036 12:36:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:28:19.036 12:36:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:19.036 12:36:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:19.036 rmmod nvme_tcp 00:28:19.036 rmmod nvme_fabrics 00:28:19.036 rmmod nvme_keyring 00:28:19.036 12:36:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:19.036 12:36:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:28:19.036 12:36:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:28:19.036 12:36:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1785265 ']' 00:28:19.036 12:36:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1785265 00:28:19.036 12:36:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1785265 ']' 00:28:19.036 12:36:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1785265 00:28:19.036 12:36:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:28:19.036 12:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:19.036 12:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1785265 00:28:19.037 12:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:19.037 12:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:19.037 12:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1785265' 00:28:19.037 killing process with pid 1785265 00:28:19.037 
12:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1785265 00:28:19.037 12:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1785265 00:28:19.296 12:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:19.296 12:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:19.296 12:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:19.296 12:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:19.296 12:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:28:19.296 12:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:19.296 12:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:28:19.296 12:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:19.296 12:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:19.296 12:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.296 12:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:19.296 12:36:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.203 12:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:21.203 00:28:21.203 real 0m26.160s 00:28:21.203 user 1m0.990s 00:28:21.203 sys 0m6.806s 00:28:21.203 12:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:21.203 12:36:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:21.203 ************************************ 00:28:21.203 END TEST nvmf_bdevperf 00:28:21.203 
************************************ 00:28:21.203 12:36:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:21.203 12:36:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:21.203 12:36:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:21.203 12:36:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.463 ************************************ 00:28:21.463 START TEST nvmf_target_disconnect 00:28:21.463 ************************************ 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:21.463 * Looking for test storage... 00:28:21.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:21.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.463 --rc genhtml_branch_coverage=1 00:28:21.463 --rc genhtml_function_coverage=1 00:28:21.463 --rc genhtml_legend=1 00:28:21.463 --rc geninfo_all_blocks=1 00:28:21.463 --rc geninfo_unexecuted_blocks=1 
00:28:21.463 00:28:21.463 ' 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:21.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.463 --rc genhtml_branch_coverage=1 00:28:21.463 --rc genhtml_function_coverage=1 00:28:21.463 --rc genhtml_legend=1 00:28:21.463 --rc geninfo_all_blocks=1 00:28:21.463 --rc geninfo_unexecuted_blocks=1 00:28:21.463 00:28:21.463 ' 00:28:21.463 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:21.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.463 --rc genhtml_branch_coverage=1 00:28:21.463 --rc genhtml_function_coverage=1 00:28:21.463 --rc genhtml_legend=1 00:28:21.463 --rc geninfo_all_blocks=1 00:28:21.463 --rc geninfo_unexecuted_blocks=1 00:28:21.463 00:28:21.463 ' 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:21.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.464 --rc genhtml_branch_coverage=1 00:28:21.464 --rc genhtml_function_coverage=1 00:28:21.464 --rc genhtml_legend=1 00:28:21.464 --rc geninfo_all_blocks=1 00:28:21.464 --rc geninfo_unexecuted_blocks=1 00:28:21.464 00:28:21.464 ' 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:21.464 12:36:43 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:21.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:21.464 12:36:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:28.035 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:28.035 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:28:28.035 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:28.035 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:28.035 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:28.035 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:28.035 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:28.035 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:28:28.035 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:28.035 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:28:28.035 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:28:28.035 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:28:28.035 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:28:28.035 
12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:28:28.035 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:28:28.035 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:28.035 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:28.036 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:28.036 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:28.036 Found net devices under 0000:86:00.0: cvl_0_0 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:28.036 Found net devices under 0000:86:00.1: cvl_0_1 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:28.036 12:36:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:28.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:28.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:28:28.036 00:28:28.036 --- 10.0.0.2 ping statistics --- 00:28:28.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.036 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:28.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:28.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:28:28.036 00:28:28.036 --- 10.0.0.1 ping statistics --- 00:28:28.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:28.036 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:28.036 12:36:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:28.036 ************************************ 00:28:28.036 START TEST nvmf_target_disconnect_tc1 00:28:28.036 ************************************ 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:28.036 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 
-- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect ]] 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:28.037 [2024-12-10 12:36:49.680104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.037 [2024-12-10 12:36:49.680148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0xb2dac0 with addr=10.0.0.2, port=4420 00:28:28.037 [2024-12-10 12:36:49.680178] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:28.037 [2024-12-10 12:36:49.680187] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:28.037 [2024-12-10 12:36:49.680194] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:28:28.037 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:28.037 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect: errors occurred 00:28:28.037 Initializing NVMe Controllers 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:28.037 00:28:28.037 real 0m0.119s 00:28:28.037 user 0m0.047s 00:28:28.037 sys 0m0.072s 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:28.037 ************************************ 00:28:28.037 END TEST nvmf_target_disconnect_tc1 00:28:28.037 ************************************ 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:28.037 12:36:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:28.037 ************************************ 00:28:28.037 START TEST nvmf_target_disconnect_tc2 00:28:28.037 ************************************ 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1790435 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1790435 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1790435 ']' 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:28.037 12:36:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.037 [2024-12-10 12:36:49.824518] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:28:28.037 [2024-12-10 12:36:49.824559] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:28.037 [2024-12-10 12:36:49.902455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:28.037 [2024-12-10 12:36:49.944755] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:28.037 [2024-12-10 12:36:49.944792] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:28.037 [2024-12-10 12:36:49.944800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:28.037 [2024-12-10 12:36:49.944808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:28.037 [2024-12-10 12:36:49.944813] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
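For readers reproducing this rig by hand: the interface preparation traced earlier in this log (the `nvmf_tcp_init` steps of nvmf/common.sh, lines @250–@291) boils down to the command sequence below. This is a dry-run sketch that only prints each step; the `run` wrapper is ours, the real commands need root and the two E810 ports (cvl_0_0, cvl_0_1) present on this node. Swap `echo` for actual execution to apply it.

```shell
#!/usr/bin/env bash
# Dry-run recap of the netns plumbing performed by nvmf/common.sh above.
# run() only prints the command; replace 'echo' with real execution under root.
run() { echo "+ $*"; }

setup_cvl_netns_dryrun() {
    run ip -4 addr flush cvl_0_0
    run ip -4 addr flush cvl_0_1
    run ip netns add cvl_0_0_ns_spdk                       # target side gets its own netns
    run ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    run ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, host netns
    run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    run ip link set cvl_0_1 up
    run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    run ip netns exec cvl_0_0_ns_spdk ip link set lo up
    run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                                 # initiator -> target check
    run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator check
}

setup_cvl_netns_dryrun
```

The two ping checks mirror the successful `ping -c 1` round trips shown in the log (0.313 ms and 0.114 ms) that gate the `return 0` from `nvmf_tcp_init`.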
00:28:28.037 [2024-12-10 12:36:49.946484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:28.037 [2024-12-10 12:36:49.946609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:28.037 [2024-12-10 12:36:49.946732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:28.037 [2024-12-10 12:36:49.946733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.037 Malloc0 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.037 12:36:50 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.037 [2024-12-10 12:36:50.125074] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.037 12:36:50 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.037 [2024-12-10 12:36:50.157350] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:28.037 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.038 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1790461 00:28:28.038 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:28.038 12:36:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:30.601 12:36:52 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1790435 00:28:30.601 12:36:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:30.601 Read completed with error (sct=0, sc=8) 00:28:30.601 starting I/O failed 00:28:30.601 Read completed with error (sct=0, sc=8) 00:28:30.601 starting I/O failed 00:28:30.601 Read completed with error (sct=0, sc=8) 00:28:30.601 starting I/O failed 00:28:30.601 Read completed with error (sct=0, sc=8) 00:28:30.601 starting I/O failed 00:28:30.601 Read completed with error (sct=0, sc=8) 00:28:30.601 starting I/O failed 00:28:30.601 Write completed with error (sct=0, sc=8) 00:28:30.601 starting I/O failed 00:28:30.601 Read completed with error (sct=0, sc=8) 00:28:30.601 starting I/O failed 00:28:30.601 Read completed with error (sct=0, sc=8) 00:28:30.601 starting I/O failed 00:28:30.601 Read completed with error (sct=0, sc=8) 00:28:30.601 starting I/O failed 00:28:30.601 Write completed with error (sct=0, sc=8) 00:28:30.601 starting I/O failed 00:28:30.601 Read completed with error (sct=0, sc=8) 00:28:30.601 starting I/O failed 00:28:30.601 Read completed with error (sct=0, sc=8) 00:28:30.601 starting I/O failed 00:28:30.601 Write completed with error (sct=0, sc=8) 00:28:30.601 starting I/O failed 00:28:30.601 Write completed with error (sct=0, sc=8) 00:28:30.601 starting I/O failed 00:28:30.601 Write completed with error (sct=0, sc=8) 00:28:30.601 starting I/O failed 00:28:30.601 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 
Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 [2024-12-10 12:36:52.185630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 
00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 
[2024-12-10 12:36:52.185838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O 
failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 [2024-12-10 12:36:52.186038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 
starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Read completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 Write completed with error (sct=0, sc=8) 00:28:30.602 starting I/O failed 00:28:30.602 [2024-12-10 12:36:52.186245] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:30.603 [2024-12-10 12:36:52.186361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.603 [2024-12-10 12:36:52.186383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.603 qpair failed and we were unable to recover it.
00:28:30.603 [2024-12-10 12:36:52.186538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.603 [2024-12-10 12:36:52.186548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.603 qpair failed and we were unable to recover it.
00:28:30.603 [2024-12-10 12:36:52.186696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.603 [2024-12-10 12:36:52.186706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.603 qpair failed and we were unable to recover it.
00:28:30.603 [2024-12-10 12:36:52.186781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.603 [2024-12-10 12:36:52.186790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.603 qpair failed and we were unable to recover it.
00:28:30.603 [2024-12-10 12:36:52.186919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.603 [2024-12-10 12:36:52.186929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.603 qpair failed and we were unable to recover it.
00:28:30.603 [2024-12-10 12:36:52.186996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.603 [2024-12-10 12:36:52.187005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.603 qpair failed and we were unable to recover it.
00:28:30.603 [2024-12-10 12:36:52.187066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.603 [2024-12-10 12:36:52.187076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.603 qpair failed and we were unable to recover it.
00:28:30.603 [2024-12-10 12:36:52.187206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.603 [2024-12-10 12:36:52.187217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.603 qpair failed and we were unable to recover it.
00:28:30.603 [2024-12-10 12:36:52.187299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.603 [2024-12-10 12:36:52.187308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.603 qpair failed and we were unable to recover it.
00:28:30.603 [2024-12-10 12:36:52.187381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.603 [2024-12-10 12:36:52.187390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.603 qpair failed and we were unable to recover it.
00:28:30.603 [2024-12-10 12:36:52.187466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.603 [2024-12-10 12:36:52.187475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.603 qpair failed and we were unable to recover it.
00:28:30.603 [2024-12-10 12:36:52.187529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.603 [2024-12-10 12:36:52.187539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.603 qpair failed and we were unable to recover it.
00:28:30.603 [2024-12-10 12:36:52.187623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.603 [2024-12-10 12:36:52.187635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.603 qpair failed and we were unable to recover it.
00:28:30.603 [2024-12-10 12:36:52.187702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.603 [2024-12-10 12:36:52.187711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.603 qpair failed and we were unable to recover it.
00:28:30.603 [2024-12-10 12:36:52.187840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.603 [2024-12-10 12:36:52.187849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.603 qpair failed and we were unable to recover it.
00:28:30.603 [2024-12-10 12:36:52.187965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.187974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-10 12:36:52.188102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.188111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-10 12:36:52.188263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.188273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-10 12:36:52.188415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.188425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-10 12:36:52.188555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.188566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 
00:28:30.603 [2024-12-10 12:36:52.188633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.188646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-10 12:36:52.188734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.188746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-10 12:36:52.188883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.188892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-10 12:36:52.188959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.188969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-10 12:36:52.189144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.189154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 
00:28:30.603 [2024-12-10 12:36:52.189249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.189259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-10 12:36:52.189386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.189397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-10 12:36:52.189468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.189478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-10 12:36:52.189642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.189653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-10 12:36:52.189733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.189743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 
00:28:30.603 [2024-12-10 12:36:52.189952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.189962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-10 12:36:52.190044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.190053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-10 12:36:52.190127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.190136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-10 12:36:52.190291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.190304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-10 12:36:52.190451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.190463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 
00:28:30.603 [2024-12-10 12:36:52.190594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.190604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-10 12:36:52.190745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.190755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-10 12:36:52.190900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.190910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.603 qpair failed and we were unable to recover it. 00:28:30.603 [2024-12-10 12:36:52.190990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.603 [2024-12-10 12:36:52.190999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.191144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.191155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 
00:28:30.604 [2024-12-10 12:36:52.191292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.191302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.191470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.191480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.191774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.191784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.191916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.191929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.192128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.192139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 
00:28:30.604 [2024-12-10 12:36:52.192223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.192232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.192379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.192389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.192532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.192544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.192690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.192702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.192849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.192861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 
00:28:30.604 [2024-12-10 12:36:52.193092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.193122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.193240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.193273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.193479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.193509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.193679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.193711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.194009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.194038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 
00:28:30.604 [2024-12-10 12:36:52.194205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.194216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.194319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.194328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.194405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.194414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.194554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.194564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.194742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.194752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 
00:28:30.604 [2024-12-10 12:36:52.194881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.194891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.194962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.194972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.195112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.195123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.195191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.195201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.195276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.195285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 
00:28:30.604 [2024-12-10 12:36:52.195377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.195387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.195456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.195466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.195528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.195537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.195658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.195667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.195857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.195867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 
00:28:30.604 [2024-12-10 12:36:52.196084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.196095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.196218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.196229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.196459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.196470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.196532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.196544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.196707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.196718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 
00:28:30.604 [2024-12-10 12:36:52.196961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.196971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.197096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.197107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.604 qpair failed and we were unable to recover it. 00:28:30.604 [2024-12-10 12:36:52.197193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.604 [2024-12-10 12:36:52.197202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-10 12:36:52.197259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-10 12:36:52.197269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-10 12:36:52.197340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-10 12:36:52.197350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 
00:28:30.605 [2024-12-10 12:36:52.197439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-10 12:36:52.197449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-10 12:36:52.197504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-10 12:36:52.197513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-10 12:36:52.197674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-10 12:36:52.197683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-10 12:36:52.197812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-10 12:36:52.197821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-10 12:36:52.197966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-10 12:36:52.197976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 
00:28:30.605 [2024-12-10 12:36:52.198047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-10 12:36:52.198059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-10 12:36:52.198143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-10 12:36:52.198154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-10 12:36:52.198241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-10 12:36:52.198251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-10 12:36:52.198390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-10 12:36:52.198399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 00:28:30.605 [2024-12-10 12:36:52.198541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.605 [2024-12-10 12:36:52.198552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.605 qpair failed and we were unable to recover it. 
00:28:30.605 [2024-12-10 12:36:52.198672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.198683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.198844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.198854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.198993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.199003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.199138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.199150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.199232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.199244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.199363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.199377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.199441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.199451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.199525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.199534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.199674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.199684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.199839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.199849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.199916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.199940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.200038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.200050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.200131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.200145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.200304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.200332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.200421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.200436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.200571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.200585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.200738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.200751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.201042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.201073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.201247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.201281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.201452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.201484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.201616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.201646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.201864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.201895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.202062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.202093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.202290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.202309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.202457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.202473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.202541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.202554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.605 qpair failed and we were unable to recover it.
00:28:30.605 [2024-12-10 12:36:52.202727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.605 [2024-12-10 12:36:52.202740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.202886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.202918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.203138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.203195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.203334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.203365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.203623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.203654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.203848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.203880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.204058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.204072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.204141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.204154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.204295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.204309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.204441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.204455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.204607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.204621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.204780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.204793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.204866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.204879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.205058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.205072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.205206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.205220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.205420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.205434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.205513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.205526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.205593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.205606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.205765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.205779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.205908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.205922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.206053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.206067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.206195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.206209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.206291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.206305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.206379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.206392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.206542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.206556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.206684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.206698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.206982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.207014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.207134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.207174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.207344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.207376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.207568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.207599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.207783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.606 [2024-12-10 12:36:52.207814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.606 qpair failed and we were unable to recover it.
00:28:30.606 [2024-12-10 12:36:52.207987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.208017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.208195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.208209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.208386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.208417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.208607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.208639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.208759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.208790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.208967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.208981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.209112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.209130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.209216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.209230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.209472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.209490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.209588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.209603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.209787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.209802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.209948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.209962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.210035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.210048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.210140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.210164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.210247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.210264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.210402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.210419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.210511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.210529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.210640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.210676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.210855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.210885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.211002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.211034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.211304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.211323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.211397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.211413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.211510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.211527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.211678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.211697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.211792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.211809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.212026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.212044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.212253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.212273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.212444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.212464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.212625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.212643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.212816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.212833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.213048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.213068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.213214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.213233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.213311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.213329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.213500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.213518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.213745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.213763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.213914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.213932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.607 qpair failed and we were unable to recover it.
00:28:30.607 [2024-12-10 12:36:52.214103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.607 [2024-12-10 12:36:52.214142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.608 qpair failed and we were unable to recover it.
00:28:30.608 [2024-12-10 12:36:52.214324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.608 [2024-12-10 12:36:52.214355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.608 qpair failed and we were unable to recover it.
00:28:30.608 [2024-12-10 12:36:52.214640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.608 [2024-12-10 12:36:52.214671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.608 qpair failed and we were unable to recover it.
00:28:30.608 [2024-12-10 12:36:52.214976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.608 [2024-12-10 12:36:52.215007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.608 qpair failed and we were unable to recover it.
00:28:30.608 [2024-12-10 12:36:52.215187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.608 [2024-12-10 12:36:52.215220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.608 qpair failed and we were unable to recover it.
00:28:30.608 [2024-12-10 12:36:52.215386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.608 [2024-12-10 12:36:52.215416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.608 qpair failed and we were unable to recover it.
00:28:30.608 [2024-12-10 12:36:52.215602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.608 [2024-12-10 12:36:52.215633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.608 qpair failed and we were unable to recover it.
00:28:30.608 [2024-12-10 12:36:52.215745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.608 [2024-12-10 12:36:52.215775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.608 qpair failed and we were unable to recover it.
00:28:30.608 [2024-12-10 12:36:52.216034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.608 [2024-12-10 12:36:52.216065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.608 qpair failed and we were unable to recover it.
00:28:30.608 [2024-12-10 12:36:52.216351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.608 [2024-12-10 12:36:52.216369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.608 qpair failed and we were unable to recover it.
00:28:30.608 [2024-12-10 12:36:52.216579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.608 [2024-12-10 12:36:52.216600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.608 qpair failed and we were unable to recover it.
00:28:30.608 [2024-12-10 12:36:52.216740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.608 [2024-12-10 12:36:52.216758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.608 qpair failed and we were unable to recover it.
00:28:30.608 [2024-12-10 12:36:52.216897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.608 [2024-12-10 12:36:52.216915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.608 qpair failed and we were unable to recover it.
00:28:30.608 [2024-12-10 12:36:52.217063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.608 [2024-12-10 12:36:52.217081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.608 qpair failed and we were unable to recover it.
00:28:30.608 [2024-12-10 12:36:52.217226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.608 [2024-12-10 12:36:52.217244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.608 qpair failed and we were unable to recover it.
00:28:30.608 [2024-12-10 12:36:52.217383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.608 [2024-12-10 12:36:52.217401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.608 qpair failed and we were unable to recover it.
00:28:30.608 [2024-12-10 12:36:52.217490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.608 [2024-12-10 12:36:52.217508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.608 qpair failed and we were unable to recover it.
00:28:30.608 [2024-12-10 12:36:52.217669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.608 [2024-12-10 12:36:52.217688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.608 qpair failed and we were unable to recover it.
00:28:30.608 [2024-12-10 12:36:52.217865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.608 [2024-12-10 12:36:52.217883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.608 qpair failed and we were unable to recover it.
00:28:30.608 [2024-12-10 12:36:52.218068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.608 [2024-12-10 12:36:52.218086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.608 qpair failed and we were unable to recover it.
00:28:30.608 [2024-12-10 12:36:52.218285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.608 [2024-12-10 12:36:52.218317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.608 qpair failed and we were unable to recover it.
00:28:30.608 [2024-12-10 12:36:52.218495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.608 [2024-12-10 12:36:52.218526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.608 qpair failed and we were unable to recover it.
00:28:30.608 [2024-12-10 12:36:52.218665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.608 [2024-12-10 12:36:52.218696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.608 qpair failed and we were unable to recover it. 00:28:30.608 [2024-12-10 12:36:52.218890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.608 [2024-12-10 12:36:52.218921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.608 qpair failed and we were unable to recover it. 00:28:30.608 [2024-12-10 12:36:52.219056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.608 [2024-12-10 12:36:52.219076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.608 qpair failed and we were unable to recover it. 00:28:30.608 [2024-12-10 12:36:52.219252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.608 [2024-12-10 12:36:52.219294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.608 qpair failed and we were unable to recover it. 00:28:30.608 [2024-12-10 12:36:52.219465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.608 [2024-12-10 12:36:52.219496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.608 qpair failed and we were unable to recover it. 
00:28:30.608 [2024-12-10 12:36:52.219725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.608 [2024-12-10 12:36:52.219759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.608 qpair failed and we were unable to recover it. 00:28:30.608 [2024-12-10 12:36:52.219944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.608 [2024-12-10 12:36:52.219961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.608 qpair failed and we were unable to recover it. 00:28:30.608 [2024-12-10 12:36:52.220197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.608 [2024-12-10 12:36:52.220216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.608 qpair failed and we were unable to recover it. 00:28:30.608 [2024-12-10 12:36:52.220304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.608 [2024-12-10 12:36:52.220322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.608 qpair failed and we were unable to recover it. 00:28:30.608 [2024-12-10 12:36:52.220466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.608 [2024-12-10 12:36:52.220496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.608 qpair failed and we were unable to recover it. 
00:28:30.608 [2024-12-10 12:36:52.220628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.608 [2024-12-10 12:36:52.220660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.608 qpair failed and we were unable to recover it. 00:28:30.608 [2024-12-10 12:36:52.220849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.608 [2024-12-10 12:36:52.220880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.608 qpair failed and we were unable to recover it. 00:28:30.608 [2024-12-10 12:36:52.220981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.608 [2024-12-10 12:36:52.221012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.608 qpair failed and we were unable to recover it. 00:28:30.608 [2024-12-10 12:36:52.221120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.608 [2024-12-10 12:36:52.221150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.608 qpair failed and we were unable to recover it. 00:28:30.608 [2024-12-10 12:36:52.221424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.608 [2024-12-10 12:36:52.221457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.608 qpair failed and we were unable to recover it. 
00:28:30.608 [2024-12-10 12:36:52.221632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.608 [2024-12-10 12:36:52.221663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.221798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.221829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.222037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.222068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.222202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.222235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.222340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.222371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 
00:28:30.609 [2024-12-10 12:36:52.222503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.222534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.222701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.222731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.222950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.222981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.223106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.223137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.223331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.223363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 
00:28:30.609 [2024-12-10 12:36:52.223630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.223660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.223827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.223858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.224124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.224154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.224337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.224377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.224553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.224622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 
00:28:30.609 [2024-12-10 12:36:52.224866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.224901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.225072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.225104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.225257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.225290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.225560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.225592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.225771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.225803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 
00:28:30.609 [2024-12-10 12:36:52.225983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.226020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.226303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.226336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.226608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.226638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.226842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.226874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.226988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.227020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 
00:28:30.609 [2024-12-10 12:36:52.227209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.227241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.227419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.227450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.227596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.227628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.227807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.227838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.228032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.228063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 
00:28:30.609 [2024-12-10 12:36:52.228178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.228210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.228475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.228506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.228622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.228652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.228772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.228804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.228918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.228948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 
00:28:30.609 [2024-12-10 12:36:52.229088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.229119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.229320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.229352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.229462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.609 [2024-12-10 12:36:52.229492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.609 qpair failed and we were unable to recover it. 00:28:30.609 [2024-12-10 12:36:52.229678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.229708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 00:28:30.610 [2024-12-10 12:36:52.229895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.229927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 
00:28:30.610 [2024-12-10 12:36:52.230097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.230145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 00:28:30.610 [2024-12-10 12:36:52.230371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.230404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 00:28:30.610 [2024-12-10 12:36:52.230579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.230610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 00:28:30.610 [2024-12-10 12:36:52.230867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.230898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 00:28:30.610 [2024-12-10 12:36:52.231070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.231101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 
00:28:30.610 [2024-12-10 12:36:52.231368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.231400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 00:28:30.610 [2024-12-10 12:36:52.231523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.231555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 00:28:30.610 [2024-12-10 12:36:52.231729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.231758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 00:28:30.610 [2024-12-10 12:36:52.231932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.231963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 00:28:30.610 [2024-12-10 12:36:52.232145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.232187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 
00:28:30.610 [2024-12-10 12:36:52.232412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.232444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 00:28:30.610 [2024-12-10 12:36:52.232581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.232612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 00:28:30.610 [2024-12-10 12:36:52.232797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.232828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 00:28:30.610 [2024-12-10 12:36:52.232995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.233024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 00:28:30.610 [2024-12-10 12:36:52.233258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.233291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 
00:28:30.610 [2024-12-10 12:36:52.233464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.233495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 00:28:30.610 [2024-12-10 12:36:52.233623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.233653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 00:28:30.610 [2024-12-10 12:36:52.233995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.234025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 00:28:30.610 [2024-12-10 12:36:52.234217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.234249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 00:28:30.610 [2024-12-10 12:36:52.234454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.234486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 
00:28:30.610 [2024-12-10 12:36:52.234598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.234628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 00:28:30.610 [2024-12-10 12:36:52.234826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.234856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 00:28:30.610 [2024-12-10 12:36:52.235107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.235138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 00:28:30.610 [2024-12-10 12:36:52.235352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.235383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 00:28:30.610 [2024-12-10 12:36:52.235510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.610 [2024-12-10 12:36:52.235540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.610 qpair failed and we were unable to recover it. 
...
00:28:30.613 [2024-12-10 12:36:52.258590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.613 [2024-12-10 12:36:52.258619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.613 qpair failed and we were unable to recover it.
00:28:30.613 [2024-12-10 12:36:52.258875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-10 12:36:52.258906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-10 12:36:52.259096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-10 12:36:52.259126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-10 12:36:52.259293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-10 12:36:52.259324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-10 12:36:52.259495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-10 12:36:52.259526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-10 12:36:52.259717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-10 12:36:52.259748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 
00:28:30.613 [2024-12-10 12:36:52.259864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-10 12:36:52.259893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-10 12:36:52.260140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-10 12:36:52.260179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-10 12:36:52.260285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-10 12:36:52.260314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-10 12:36:52.260432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-10 12:36:52.260463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-10 12:36:52.260677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-10 12:36:52.260708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 
00:28:30.613 [2024-12-10 12:36:52.261006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-10 12:36:52.261037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.613 [2024-12-10 12:36:52.261156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.613 [2024-12-10 12:36:52.261197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.613 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.261326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.261356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.261570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.261601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.261716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.261747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 
00:28:30.614 [2024-12-10 12:36:52.261924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.261954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.262134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.262174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.262449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.262478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.262600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.262629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.263007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.263038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 
00:28:30.614 [2024-12-10 12:36:52.263234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.263267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.263460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.263490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.263609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.263639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.263885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.263916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.264156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.264197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 
00:28:30.614 [2024-12-10 12:36:52.264463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.264494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.264737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.264767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.264957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.264986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.265174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.265211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.265332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.265361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 
00:28:30.614 [2024-12-10 12:36:52.265540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.265570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.265877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.265907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.266078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.266108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.266253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.266284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.266478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.266509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 
00:28:30.614 [2024-12-10 12:36:52.266677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.266706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.266892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.266922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.267101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.267130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.267285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.267315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.267650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.267680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 
00:28:30.614 [2024-12-10 12:36:52.267876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.267907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.268009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.268039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.268202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.268234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.268363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.268392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.268615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.268646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 
00:28:30.614 [2024-12-10 12:36:52.268748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.268778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.268949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.268978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.269252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.269284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.269527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.614 [2024-12-10 12:36:52.269558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.614 qpair failed and we were unable to recover it. 00:28:30.614 [2024-12-10 12:36:52.269809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.269840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 
00:28:30.615 [2024-12-10 12:36:52.270108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.270139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.270334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.270365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.270562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.270592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.270847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.270877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.271045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.271076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 
00:28:30.615 [2024-12-10 12:36:52.271362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.271400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.271641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.271673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.271788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.271817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.271938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.271967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.272084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.272114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 
00:28:30.615 [2024-12-10 12:36:52.272253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.272284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.272419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.272449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.272582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.272614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.272898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.272929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.273050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.273080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 
00:28:30.615 [2024-12-10 12:36:52.273275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.273307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.273485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.273516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.273841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.273872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.274082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.274113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.274319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.274351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 
00:28:30.615 [2024-12-10 12:36:52.274620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.274650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.274863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.274895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.275069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.275100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.275325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.275356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.275554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.275584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 
00:28:30.615 [2024-12-10 12:36:52.275832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.275863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.276069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.276099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.276223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.276254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.276462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.276497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 00:28:30.615 [2024-12-10 12:36:52.276620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.615 [2024-12-10 12:36:52.276651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.615 qpair failed and we were unable to recover it. 
00:28:30.618 [2024-12-10 12:36:52.300976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.618 [2024-12-10 12:36:52.301007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.618 qpair failed and we were unable to recover it. 00:28:30.618 [2024-12-10 12:36:52.301270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.618 [2024-12-10 12:36:52.301303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.618 qpair failed and we were unable to recover it. 00:28:30.618 [2024-12-10 12:36:52.301429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.618 [2024-12-10 12:36:52.301461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.618 qpair failed and we were unable to recover it. 00:28:30.618 [2024-12-10 12:36:52.301648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.618 [2024-12-10 12:36:52.301677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.618 qpair failed and we were unable to recover it. 00:28:30.618 [2024-12-10 12:36:52.301943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.618 [2024-12-10 12:36:52.301975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.618 qpair failed and we were unable to recover it. 
00:28:30.618 [2024-12-10 12:36:52.302170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.618 [2024-12-10 12:36:52.302201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.618 qpair failed and we were unable to recover it. 00:28:30.618 [2024-12-10 12:36:52.302400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.618 [2024-12-10 12:36:52.302431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.618 qpair failed and we were unable to recover it. 00:28:30.618 [2024-12-10 12:36:52.302554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.618 [2024-12-10 12:36:52.302584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.618 qpair failed and we were unable to recover it. 00:28:30.618 [2024-12-10 12:36:52.302830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.618 [2024-12-10 12:36:52.302861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.618 qpair failed and we were unable to recover it. 00:28:30.618 [2024-12-10 12:36:52.303044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.303075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 
00:28:30.619 [2024-12-10 12:36:52.303254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.303285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.303463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.303494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.303686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.303715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.303886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.303916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.304183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.304216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 
00:28:30.619 [2024-12-10 12:36:52.304431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.304462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.304636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.304667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.304919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.304951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.305123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.305154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.305367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.305398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 
00:28:30.619 [2024-12-10 12:36:52.305668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.305699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.305832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.305863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.306149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.306190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.306393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.306425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.306553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.306582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 
00:28:30.619 [2024-12-10 12:36:52.306777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.306808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.306926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.306957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.307143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.307185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.307371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.307402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.307578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.307608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 
00:28:30.619 [2024-12-10 12:36:52.307726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.307754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.307942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.307974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.308218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.308251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.308458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.308489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.308664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.308696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 
00:28:30.619 [2024-12-10 12:36:52.308982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.309013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.309133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.309173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.309347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.309378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.309577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.309608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.309877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.309907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 
00:28:30.619 [2024-12-10 12:36:52.310113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.310143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.310328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.310358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.310544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.310578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.310800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.310832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.311025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.311057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 
00:28:30.619 [2024-12-10 12:36:52.311238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.311271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.311411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.311446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.619 [2024-12-10 12:36:52.311574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.619 [2024-12-10 12:36:52.311604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.619 qpair failed and we were unable to recover it. 00:28:30.620 [2024-12-10 12:36:52.311724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.311755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 00:28:30.620 [2024-12-10 12:36:52.311890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.311920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 
00:28:30.620 [2024-12-10 12:36:52.312202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.312235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 00:28:30.620 [2024-12-10 12:36:52.312413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.312444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 00:28:30.620 [2024-12-10 12:36:52.312567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.312599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 00:28:30.620 [2024-12-10 12:36:52.312890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.312923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 00:28:30.620 [2024-12-10 12:36:52.313118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.313151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 
00:28:30.620 [2024-12-10 12:36:52.313306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.313344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 00:28:30.620 [2024-12-10 12:36:52.313572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.313603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 00:28:30.620 [2024-12-10 12:36:52.313824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.313856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 00:28:30.620 [2024-12-10 12:36:52.314056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.314086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 00:28:30.620 [2024-12-10 12:36:52.314285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.314317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 
00:28:30.620 [2024-12-10 12:36:52.314519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.314548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 00:28:30.620 [2024-12-10 12:36:52.314737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.314769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 00:28:30.620 [2024-12-10 12:36:52.314896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.314927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 00:28:30.620 [2024-12-10 12:36:52.315045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.315075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 00:28:30.620 [2024-12-10 12:36:52.315189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.315222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 
00:28:30.620 [2024-12-10 12:36:52.315397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.315428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 00:28:30.620 [2024-12-10 12:36:52.315701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.315732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 00:28:30.620 [2024-12-10 12:36:52.315945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.315976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 00:28:30.620 [2024-12-10 12:36:52.316174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.316207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 00:28:30.620 [2024-12-10 12:36:52.316395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.316425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 
00:28:30.620 [2024-12-10 12:36:52.316546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.316575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 00:28:30.620 [2024-12-10 12:36:52.316780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.316813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 00:28:30.620 [2024-12-10 12:36:52.317030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.317061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 00:28:30.620 [2024-12-10 12:36:52.317283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.317316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 00:28:30.620 [2024-12-10 12:36:52.317507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.317539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it. 
00:28:30.620 [2024-12-10 12:36:52.317719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.620 [2024-12-10 12:36:52.317749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.620 qpair failed and we were unable to recover it.
[log condensed: the same error triple — connect() failed, errno = 111 (ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it." — repeats 47 more times for tqpair=0x1574be0 between 12:36:52.317923 and 12:36:52.327940]
00:28:30.622 [2024-12-10 12:36:52.328257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.622 [2024-12-10 12:36:52.328333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.622 qpair failed and we were unable to recover it.
[log condensed: the same error triple repeats 39 more times for tqpair=0x7f88d0000b90 between 12:36:52.328493 and 12:36:52.336425]
00:28:30.623 [2024-12-10 12:36:52.336711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.623 [2024-12-10 12:36:52.336803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.623 qpair failed and we were unable to recover it.
[log condensed: the same error triple repeats 26 more times for tqpair=0x7f88d8000b90 between 12:36:52.337027 and 12:36:52.342434]
00:28:30.623 [2024-12-10 12:36:52.342630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.623 [2024-12-10 12:36:52.342661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.623 qpair failed and we were unable to recover it. 00:28:30.623 [2024-12-10 12:36:52.342881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.623 [2024-12-10 12:36:52.342911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.623 qpair failed and we were unable to recover it. 00:28:30.623 [2024-12-10 12:36:52.343114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.623 [2024-12-10 12:36:52.343145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.623 qpair failed and we were unable to recover it. 00:28:30.623 [2024-12-10 12:36:52.343278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.343309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.343505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.343536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 
00:28:30.624 [2024-12-10 12:36:52.343797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.343827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.344123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.344154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.344370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.344401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.344558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.344589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.344729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.344759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 
00:28:30.624 [2024-12-10 12:36:52.345045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.345077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.345211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.345242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.345356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.345385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.345600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.345632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.345910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.345941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 
00:28:30.624 [2024-12-10 12:36:52.346165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.346198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.346377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.346409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.346587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.346618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.346858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.346889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.347124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.347155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 
00:28:30.624 [2024-12-10 12:36:52.347298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.347327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.347503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.347533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.347662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.347691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.347910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.347942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.348136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.348182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 
00:28:30.624 [2024-12-10 12:36:52.348388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.348418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.348635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.348672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.348890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.348922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.349153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.349196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.349321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.349350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 
00:28:30.624 [2024-12-10 12:36:52.349573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.349603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.349873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.349903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.350026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.350056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.350270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.624 [2024-12-10 12:36:52.350303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.624 qpair failed and we were unable to recover it. 00:28:30.624 [2024-12-10 12:36:52.350429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.350459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 
00:28:30.625 [2024-12-10 12:36:52.350665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.350697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.351003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.351034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.351299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.351332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.351550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.351581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.351714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.351744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 
00:28:30.625 [2024-12-10 12:36:52.352037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.352069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.352295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.352327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.352519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.352549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.352668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.352699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.353001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.353032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 
00:28:30.625 [2024-12-10 12:36:52.353231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.353264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.353472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.353503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.353694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.353725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.353912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.353943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.354126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.354168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 
00:28:30.625 [2024-12-10 12:36:52.354360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.354391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.354597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.354627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.354911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.354943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.355069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.355100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.355221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.355253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 
00:28:30.625 [2024-12-10 12:36:52.355384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.355414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.355610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.355640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.355896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.355927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.356131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.356171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.356458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.356489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 
00:28:30.625 [2024-12-10 12:36:52.356670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.356701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.356996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.357027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.357214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.357246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.357385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.357416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.357562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.357591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 
00:28:30.625 [2024-12-10 12:36:52.357713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.357742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.357923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.357960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.358155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.358197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.358379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.358411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.358638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.358669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 
00:28:30.625 [2024-12-10 12:36:52.358893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.625 [2024-12-10 12:36:52.358924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.625 qpair failed and we were unable to recover it. 00:28:30.625 [2024-12-10 12:36:52.359104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.626 [2024-12-10 12:36:52.359134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.626 qpair failed and we were unable to recover it. 00:28:30.626 [2024-12-10 12:36:52.359341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.626 [2024-12-10 12:36:52.359371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.626 qpair failed and we were unable to recover it. 00:28:30.626 [2024-12-10 12:36:52.359579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.626 [2024-12-10 12:36:52.359608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.626 qpair failed and we were unable to recover it. 00:28:30.626 [2024-12-10 12:36:52.359823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.626 [2024-12-10 12:36:52.359854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.626 qpair failed and we were unable to recover it. 
00:28:30.626 [2024-12-10 12:36:52.360032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.626 [2024-12-10 12:36:52.360063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.626 qpair failed and we were unable to recover it. 
00:28:30.626 [identical connect()/qpair error pair repeated with varying timestamps, 2024-12-10 12:36:52.360340 through 12:36:52.385532 — duplicate lines elided]
00:28:30.629 [2024-12-10 12:36:52.385644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.385675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 00:28:30.629 [2024-12-10 12:36:52.385892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.385923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 00:28:30.629 [2024-12-10 12:36:52.386125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.386164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 00:28:30.629 [2024-12-10 12:36:52.386345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.386376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 00:28:30.629 [2024-12-10 12:36:52.386510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.386539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 
00:28:30.629 [2024-12-10 12:36:52.386763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.386794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 00:28:30.629 [2024-12-10 12:36:52.387046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.387077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 00:28:30.629 [2024-12-10 12:36:52.387261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.387294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 00:28:30.629 [2024-12-10 12:36:52.387408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.387438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 00:28:30.629 [2024-12-10 12:36:52.387576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.387606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 
00:28:30.629 [2024-12-10 12:36:52.387826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.387857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 00:28:30.629 [2024-12-10 12:36:52.388133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.388196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 00:28:30.629 [2024-12-10 12:36:52.388329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.388359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 00:28:30.629 [2024-12-10 12:36:52.388541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.388571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 00:28:30.629 [2024-12-10 12:36:52.388705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.388737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 
00:28:30.629 [2024-12-10 12:36:52.388944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.388974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 00:28:30.629 [2024-12-10 12:36:52.389194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.389227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 00:28:30.629 [2024-12-10 12:36:52.389530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.389562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 00:28:30.629 [2024-12-10 12:36:52.389691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.389722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 00:28:30.629 [2024-12-10 12:36:52.389933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.389962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 
00:28:30.629 [2024-12-10 12:36:52.390169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.390200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 00:28:30.629 [2024-12-10 12:36:52.390423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.390453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 00:28:30.629 [2024-12-10 12:36:52.390602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.390632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 00:28:30.629 [2024-12-10 12:36:52.390848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.390886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 00:28:30.629 [2024-12-10 12:36:52.391073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.391103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 
00:28:30.629 [2024-12-10 12:36:52.391318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.629 [2024-12-10 12:36:52.391351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.629 qpair failed and we were unable to recover it. 00:28:30.629 [2024-12-10 12:36:52.391622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.391653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.391775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.391806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.392069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.392100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.392334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.392366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 
00:28:30.630 [2024-12-10 12:36:52.392489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.392518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.392643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.392674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.392806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.392837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.393015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.393045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.393275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.393306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 
00:28:30.630 [2024-12-10 12:36:52.393503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.393533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.393882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.393914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.394130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.394169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.394446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.394477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.394764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.394795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 
00:28:30.630 [2024-12-10 12:36:52.394994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.395025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.395212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.395244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.395449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.395481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.395754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.395786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.396103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.396134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 
00:28:30.630 [2024-12-10 12:36:52.396299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.396330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.396455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.396484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.396719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.396750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.396939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.396968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.397217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.397248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 
00:28:30.630 [2024-12-10 12:36:52.397393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.397424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.397646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.397679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.397950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.397981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.398197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.398229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.398363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.398395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 
00:28:30.630 [2024-12-10 12:36:52.398535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.398566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.398711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.398741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.398952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.398983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.399208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.399241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.399382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.399413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 
00:28:30.630 [2024-12-10 12:36:52.399594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.399625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.399804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.399834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.400017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.400048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.400229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.400267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 00:28:30.630 [2024-12-10 12:36:52.400461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.630 [2024-12-10 12:36:52.400492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.630 qpair failed and we were unable to recover it. 
00:28:30.630 [2024-12-10 12:36:52.400671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.631 [2024-12-10 12:36:52.400702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.631 qpair failed and we were unable to recover it. 00:28:30.631 [2024-12-10 12:36:52.400999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.631 [2024-12-10 12:36:52.401030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.631 qpair failed and we were unable to recover it. 00:28:30.631 [2024-12-10 12:36:52.401210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.631 [2024-12-10 12:36:52.401242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.631 qpair failed and we were unable to recover it. 00:28:30.631 [2024-12-10 12:36:52.401378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.631 [2024-12-10 12:36:52.401408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.631 qpair failed and we were unable to recover it. 00:28:30.631 [2024-12-10 12:36:52.401541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.631 [2024-12-10 12:36:52.401571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.631 qpair failed and we were unable to recover it. 
00:28:30.631 [2024-12-10 12:36:52.401767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.631 [2024-12-10 12:36:52.401798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.631 qpair failed and we were unable to recover it. 00:28:30.631 [2024-12-10 12:36:52.402075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.631 [2024-12-10 12:36:52.402106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.631 qpair failed and we were unable to recover it. 00:28:30.631 [2024-12-10 12:36:52.402368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.631 [2024-12-10 12:36:52.402400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.631 qpair failed and we were unable to recover it. 00:28:30.631 [2024-12-10 12:36:52.402580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.631 [2024-12-10 12:36:52.402610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.631 qpair failed and we were unable to recover it. 00:28:30.631 [2024-12-10 12:36:52.402806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.631 [2024-12-10 12:36:52.402837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.631 qpair failed and we were unable to recover it. 
00:28:30.631 [2024-12-10 12:36:52.402962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.631 [2024-12-10 12:36:52.402993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.631 qpair failed and we were unable to recover it.
00:28:30.634 [... the same connect() failed (errno = 111, ECONNREFUSED) / nvme_tcp_qpair_connect_sock sock connection error / "qpair failed and we were unable to recover it." sequence repeats continuously for tqpair=0x7f88d8000b90 (addr=10.0.0.2, port=4420) from 12:36:52.403242 through 12:36:52.428317 ...]
00:28:30.634 [2024-12-10 12:36:52.428550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.634 [2024-12-10 12:36:52.428583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.634 qpair failed and we were unable to recover it. 00:28:30.634 [2024-12-10 12:36:52.428882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.634 [2024-12-10 12:36:52.428915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.634 qpair failed and we were unable to recover it. 00:28:30.634 [2024-12-10 12:36:52.429037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.634 [2024-12-10 12:36:52.429067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.634 qpair failed and we were unable to recover it. 00:28:30.634 [2024-12-10 12:36:52.429210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.634 [2024-12-10 12:36:52.429242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.634 qpair failed and we were unable to recover it. 00:28:30.634 [2024-12-10 12:36:52.429542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.634 [2024-12-10 12:36:52.429575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.634 qpair failed and we were unable to recover it. 
00:28:30.634 [2024-12-10 12:36:52.429785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.634 [2024-12-10 12:36:52.429816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.634 qpair failed and we were unable to recover it. 00:28:30.634 [2024-12-10 12:36:52.430016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.634 [2024-12-10 12:36:52.430047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.634 qpair failed and we were unable to recover it. 00:28:30.634 [2024-12-10 12:36:52.430314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.634 [2024-12-10 12:36:52.430347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.634 qpair failed and we were unable to recover it. 00:28:30.634 [2024-12-10 12:36:52.430554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.634 [2024-12-10 12:36:52.430586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.634 qpair failed and we were unable to recover it. 00:28:30.634 [2024-12-10 12:36:52.430878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.634 [2024-12-10 12:36:52.430910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.634 qpair failed and we were unable to recover it. 
00:28:30.634 [2024-12-10 12:36:52.431184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.634 [2024-12-10 12:36:52.431217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.634 qpair failed and we were unable to recover it. 00:28:30.634 [2024-12-10 12:36:52.431444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.634 [2024-12-10 12:36:52.431476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.634 qpair failed and we were unable to recover it. 00:28:30.634 [2024-12-10 12:36:52.431681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.634 [2024-12-10 12:36:52.431713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.634 qpair failed and we were unable to recover it. 00:28:30.634 [2024-12-10 12:36:52.431894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.634 [2024-12-10 12:36:52.431925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.634 qpair failed and we were unable to recover it. 00:28:30.634 [2024-12-10 12:36:52.432050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.634 [2024-12-10 12:36:52.432080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.634 qpair failed and we were unable to recover it. 
00:28:30.634 [2024-12-10 12:36:52.432290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.634 [2024-12-10 12:36:52.432322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.634 qpair failed and we were unable to recover it. 00:28:30.634 [2024-12-10 12:36:52.432516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.634 [2024-12-10 12:36:52.432547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.634 qpair failed and we were unable to recover it. 00:28:30.634 [2024-12-10 12:36:52.432658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.432688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.433015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.433046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.433231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.433264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 
00:28:30.635 [2024-12-10 12:36:52.433529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.433561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.433843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.433876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.434212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.434245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.434424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.434455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.434729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.434761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 
00:28:30.635 [2024-12-10 12:36:52.434874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.434905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.435082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.435113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.435313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.435344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.435623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.435654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.435918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.435950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 
00:28:30.635 [2024-12-10 12:36:52.436146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.436188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.436315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.436345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.436594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.436624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.436802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.436839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.437171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.437204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 
00:28:30.635 [2024-12-10 12:36:52.437459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.437490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.437631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.437664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.437867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.437899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.438188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.438234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.438445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.438476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 
00:28:30.635 [2024-12-10 12:36:52.438595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.438627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.438856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.438888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.439099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.439131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.439300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.439332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.439457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.439487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 
00:28:30.635 [2024-12-10 12:36:52.439677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.439708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.439963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.439994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.440207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.440239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.440519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.440556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.440770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.440801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 
00:28:30.635 [2024-12-10 12:36:52.440934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.440967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.441252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.441285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.441557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.441591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.441879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.441910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.442190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.442223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 
00:28:30.635 [2024-12-10 12:36:52.442360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.635 [2024-12-10 12:36:52.442392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.635 qpair failed and we were unable to recover it. 00:28:30.635 [2024-12-10 12:36:52.442535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.442567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 00:28:30.636 [2024-12-10 12:36:52.442759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.442790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 00:28:30.636 [2024-12-10 12:36:52.443017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.443048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 00:28:30.636 [2024-12-10 12:36:52.443253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.443285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 
00:28:30.636 [2024-12-10 12:36:52.443622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.443698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 00:28:30.636 [2024-12-10 12:36:52.443954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.443992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 00:28:30.636 [2024-12-10 12:36:52.444353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.444391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 00:28:30.636 [2024-12-10 12:36:52.444602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.444634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 00:28:30.636 [2024-12-10 12:36:52.444823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.444855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 
00:28:30.636 [2024-12-10 12:36:52.445056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.445090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 00:28:30.636 [2024-12-10 12:36:52.445296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.445330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 00:28:30.636 [2024-12-10 12:36:52.445508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.445540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 00:28:30.636 [2024-12-10 12:36:52.445809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.445842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 00:28:30.636 [2024-12-10 12:36:52.446025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.446057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 
00:28:30.636 [2024-12-10 12:36:52.446322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.446359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 00:28:30.636 [2024-12-10 12:36:52.446543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.446575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 00:28:30.636 [2024-12-10 12:36:52.446897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.446929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 00:28:30.636 [2024-12-10 12:36:52.447130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.447171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 00:28:30.636 [2024-12-10 12:36:52.447477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.447509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 
00:28:30.636 [2024-12-10 12:36:52.447633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.447664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 00:28:30.636 [2024-12-10 12:36:52.447862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.447893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 00:28:30.636 [2024-12-10 12:36:52.448110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.448140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 00:28:30.636 [2024-12-10 12:36:52.448403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.448436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 00:28:30.636 [2024-12-10 12:36:52.448739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.636 [2024-12-10 12:36:52.448773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.636 qpair failed and we were unable to recover it. 
00:28:30.639 [2024-12-10 12:36:52.475992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.639 [2024-12-10 12:36:52.476028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.639 qpair failed and we were unable to recover it. 00:28:30.639 [2024-12-10 12:36:52.476189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.639 [2024-12-10 12:36:52.476226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.639 qpair failed and we were unable to recover it. 00:28:30.639 [2024-12-10 12:36:52.476416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.639 [2024-12-10 12:36:52.476452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.639 qpair failed and we were unable to recover it. 00:28:30.639 [2024-12-10 12:36:52.476601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.639 [2024-12-10 12:36:52.476636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.639 qpair failed and we were unable to recover it. 00:28:30.639 [2024-12-10 12:36:52.476781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.639 [2024-12-10 12:36:52.476815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.639 qpair failed and we were unable to recover it. 
00:28:30.639 [2024-12-10 12:36:52.476931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.639 [2024-12-10 12:36:52.476961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.639 qpair failed and we were unable to recover it. 00:28:30.639 [2024-12-10 12:36:52.477189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.639 [2024-12-10 12:36:52.477233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.639 qpair failed and we were unable to recover it. 00:28:30.639 [2024-12-10 12:36:52.477390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.639 [2024-12-10 12:36:52.477426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.639 qpair failed and we were unable to recover it. 00:28:30.639 [2024-12-10 12:36:52.477607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.639 [2024-12-10 12:36:52.477639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.639 qpair failed and we were unable to recover it. 00:28:30.639 [2024-12-10 12:36:52.477855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.639 [2024-12-10 12:36:52.477890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.639 qpair failed and we were unable to recover it. 
00:28:30.639 [2024-12-10 12:36:52.478071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.639 [2024-12-10 12:36:52.478104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.639 qpair failed and we were unable to recover it. 00:28:30.639 [2024-12-10 12:36:52.478297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.639 [2024-12-10 12:36:52.478331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.639 qpair failed and we were unable to recover it. 00:28:30.639 [2024-12-10 12:36:52.478603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.639 [2024-12-10 12:36:52.478648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.639 qpair failed and we were unable to recover it. 00:28:30.639 [2024-12-10 12:36:52.478782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.478815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.479141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.479185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 
00:28:30.640 [2024-12-10 12:36:52.479509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.479542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.479828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.479861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.480068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.480100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.480385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.480419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.480701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.480743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 
00:28:30.640 [2024-12-10 12:36:52.481028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.481061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.481259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.481298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.481430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.481463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.481664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.481707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.481994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.482030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 
00:28:30.640 [2024-12-10 12:36:52.482270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.482303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.482575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.482609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.482816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.482860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.483087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.483125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.483404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.483442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 
00:28:30.640 [2024-12-10 12:36:52.483696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.483733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.483930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.483967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.484148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.484206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.484399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.484432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.484640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.484672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 
00:28:30.640 [2024-12-10 12:36:52.484970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.485003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.485195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.485231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.485419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.485451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.485656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.485700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.485994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.486026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 
00:28:30.640 [2024-12-10 12:36:52.486233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.486274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.486582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.486616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.486925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.486958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.487147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.487208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.487346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.487383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 
00:28:30.640 [2024-12-10 12:36:52.487584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.487620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.487886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.487925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.488059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.488090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.488372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.488405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.488587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.488620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 
00:28:30.640 [2024-12-10 12:36:52.488846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.488879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.640 qpair failed and we were unable to recover it. 00:28:30.640 [2024-12-10 12:36:52.489072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.640 [2024-12-10 12:36:52.489107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.489262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.489296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.489501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.489533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.489665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.489696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 
00:28:30.641 [2024-12-10 12:36:52.489929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.489962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.490182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.490217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.490503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.490534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.490755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.490789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.490923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.490955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 
00:28:30.641 [2024-12-10 12:36:52.491194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.491230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.491433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.491465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.491742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.491777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.491960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.491993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.492185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.492219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 
00:28:30.641 [2024-12-10 12:36:52.492435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.492469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.492645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.492677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.492884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.492918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.493115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.493147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.493349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.493381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 
00:28:30.641 [2024-12-10 12:36:52.493563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.493598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.493717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.493749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.493937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.493969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.494153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.494200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.494431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.494463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 
00:28:30.641 [2024-12-10 12:36:52.494605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.494638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.494790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.494824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.495057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.495096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.495292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.495329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.495535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.495572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 
00:28:30.641 [2024-12-10 12:36:52.495693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.495725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.495996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.496030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.496198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.496233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.496523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.496557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.496838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.496872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 
00:28:30.641 [2024-12-10 12:36:52.497183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.497238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.497424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.497456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.497650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.497689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.497870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.497907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.498106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.498140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 
00:28:30.641 [2024-12-10 12:36:52.498332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.641 [2024-12-10 12:36:52.498368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.641 qpair failed and we were unable to recover it. 00:28:30.641 [2024-12-10 12:36:52.498498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.498530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.498819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.498856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.499061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.499095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.499261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.499294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 
00:28:30.642 [2024-12-10 12:36:52.499477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.499510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.499713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.499747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.499883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.499917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.500040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.500076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.500277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.500311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 
00:28:30.642 [2024-12-10 12:36:52.500444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.500475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.500687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.500719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.500945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.500978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.501173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.501205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.501405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.501438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 
00:28:30.642 [2024-12-10 12:36:52.501629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.501666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.501862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.501901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.502186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.502219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.502376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.502408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.502604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.502641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 
00:28:30.642 [2024-12-10 12:36:52.502868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.502902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.503027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.503063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.503288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.503324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.503463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.503495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.503748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.503821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 
00:28:30.642 [2024-12-10 12:36:52.503987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.504024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.504278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.504315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.504518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.504550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.504728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.504759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.504956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.504989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 
00:28:30.642 [2024-12-10 12:36:52.505211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.505245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.505382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.505412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.505523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.505551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.505690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.505720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.505852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.505882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 
00:28:30.642 [2024-12-10 12:36:52.506058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.506090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.506275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.506308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.506486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.642 [2024-12-10 12:36:52.506527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.642 qpair failed and we were unable to recover it. 00:28:30.642 [2024-12-10 12:36:52.506704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.506734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.506859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.506890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 
00:28:30.643 [2024-12-10 12:36:52.507013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.507043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.507331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.507367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.507545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.507575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.507712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.507741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.507951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.507982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 
00:28:30.643 [2024-12-10 12:36:52.508169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.508202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.508343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.508372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.508651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.508682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.508800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.508830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.509100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.509131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 
00:28:30.643 [2024-12-10 12:36:52.509336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1582b20 is same with the state(6) to be set 00:28:30.643 [2024-12-10 12:36:52.509653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.509740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.509980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.510021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.510166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.510200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.510385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.510418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.510549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.510585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 
00:28:30.643 [2024-12-10 12:36:52.510721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.510753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.510968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.511004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.511285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.511320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.511500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.511533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.511743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.511777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 
00:28:30.643 [2024-12-10 12:36:52.511955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.511985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.512102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.512132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.512347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.512378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.512507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.512536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.512652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.512683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 
00:28:30.643 [2024-12-10 12:36:52.512879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.512911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.513090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.513122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.513383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.513416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.513680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.513711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 00:28:30.643 [2024-12-10 12:36:52.514008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.514040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.643 qpair failed and we were unable to recover it. 
00:28:30.643 [2024-12-10 12:36:52.514257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.643 [2024-12-10 12:36:52.514289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.644 qpair failed and we were unable to recover it. 00:28:30.644 [2024-12-10 12:36:52.514516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.644 [2024-12-10 12:36:52.514547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.644 qpair failed and we were unable to recover it. 00:28:30.644 [2024-12-10 12:36:52.514672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.644 [2024-12-10 12:36:52.514703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.644 qpair failed and we were unable to recover it. 00:28:30.644 [2024-12-10 12:36:52.514830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.644 [2024-12-10 12:36:52.514860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.644 qpair failed and we were unable to recover it. 00:28:30.644 [2024-12-10 12:36:52.515044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.644 [2024-12-10 12:36:52.515075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.644 qpair failed and we were unable to recover it. 
00:28:30.644 [2024-12-10 12:36:52.515251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.644 [2024-12-10 12:36:52.515283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.644 qpair failed and we were unable to recover it. 00:28:30.644 [2024-12-10 12:36:52.515410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.644 [2024-12-10 12:36:52.515441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.644 qpair failed and we were unable to recover it. 00:28:30.644 [2024-12-10 12:36:52.515596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.644 [2024-12-10 12:36:52.515636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.644 qpair failed and we were unable to recover it. 00:28:30.644 [2024-12-10 12:36:52.515752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.644 [2024-12-10 12:36:52.515783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.644 qpair failed and we were unable to recover it. 00:28:30.644 [2024-12-10 12:36:52.515929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.644 [2024-12-10 12:36:52.515960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.644 qpair failed and we were unable to recover it. 
00:28:30.644 [2024-12-10 12:36:52.516214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.644 [2024-12-10 12:36:52.516252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.644 qpair failed and we were unable to recover it. 
[The same three-message cycle — posix.c:1054:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats continuously from 12:36:52.516 through 12:36:52.543; the duplicate entries have been elided.]
00:28:30.647 [2024-12-10 12:36:52.543426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.543459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.543725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.543755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.544056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.544087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.544208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.544240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.544442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.544479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 
00:28:30.647 [2024-12-10 12:36:52.544688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.544719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.545031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.545063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.545335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.545366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.545529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.545560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.545741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.545771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 
00:28:30.647 [2024-12-10 12:36:52.545976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.546007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.546185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.546216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.546474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.546506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.546626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.546656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.546853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.546885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 
00:28:30.647 [2024-12-10 12:36:52.547003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.547034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.547241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.547273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.547415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.547446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.547639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.547668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.547892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.547924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 
00:28:30.647 [2024-12-10 12:36:52.548143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.548185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.548321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.548351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.548468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.548499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.548793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.548824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.549031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.549062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 
00:28:30.647 [2024-12-10 12:36:52.549189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.549221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.549423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.549454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.549661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.549693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.549885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.549916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.550215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.550248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 
00:28:30.647 [2024-12-10 12:36:52.550451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.550482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.550659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.550696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.550914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.647 [2024-12-10 12:36:52.550945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.647 qpair failed and we were unable to recover it. 00:28:30.647 [2024-12-10 12:36:52.551070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.551100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.551307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.551338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 
00:28:30.648 [2024-12-10 12:36:52.551612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.551644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.551938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.551969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.552144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.552184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.552300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.552329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.552537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.552569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 
00:28:30.648 [2024-12-10 12:36:52.552778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.552808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.553044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.553075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.553267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.553299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.553477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.553509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.553621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.553651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 
00:28:30.648 [2024-12-10 12:36:52.553783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.553813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.554018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.554050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.554303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.554336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.554539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.554570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.554764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.554795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 
00:28:30.648 [2024-12-10 12:36:52.555033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.555064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.555293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.555326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.555475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.555505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.555697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.555727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.555912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.555943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 
00:28:30.648 [2024-12-10 12:36:52.556062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.556093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.556316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.556348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.556524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.556555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.556684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.556720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.556857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.556889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 
00:28:30.648 [2024-12-10 12:36:52.557101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.557131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.557350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.557382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.557502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.557532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.648 qpair failed and we were unable to recover it. 00:28:30.648 [2024-12-10 12:36:52.557835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.648 [2024-12-10 12:36:52.557867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.649 qpair failed and we were unable to recover it. 00:28:30.649 [2024-12-10 12:36:52.558063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.649 [2024-12-10 12:36:52.558095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.649 qpair failed and we were unable to recover it. 
00:28:30.649 [2024-12-10 12:36:52.558223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.649 [2024-12-10 12:36:52.558255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.649 qpair failed and we were unable to recover it. 00:28:30.649 [2024-12-10 12:36:52.558381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.649 [2024-12-10 12:36:52.558413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.649 qpair failed and we were unable to recover it. 00:28:30.649 [2024-12-10 12:36:52.558541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.649 [2024-12-10 12:36:52.558571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.649 qpair failed and we were unable to recover it. 00:28:30.649 [2024-12-10 12:36:52.558701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.649 [2024-12-10 12:36:52.558732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.649 qpair failed and we were unable to recover it. 00:28:30.649 [2024-12-10 12:36:52.558854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.649 [2024-12-10 12:36:52.558885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.649 qpair failed and we were unable to recover it. 
00:28:30.649 [2024-12-10 12:36:52.559064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.649 [2024-12-10 12:36:52.559094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.649 qpair failed and we were unable to recover it. 00:28:30.649 [2024-12-10 12:36:52.559369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.649 [2024-12-10 12:36:52.559402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.649 qpair failed and we were unable to recover it. 00:28:30.649 [2024-12-10 12:36:52.559587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.649 [2024-12-10 12:36:52.559619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.649 qpair failed and we were unable to recover it. 00:28:30.649 [2024-12-10 12:36:52.559745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.649 [2024-12-10 12:36:52.559776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.649 qpair failed and we were unable to recover it. 00:28:30.649 [2024-12-10 12:36:52.559906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.649 [2024-12-10 12:36:52.559937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.649 qpair failed and we were unable to recover it. 
00:28:30.649 [2024-12-10 12:36:52.560140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.649 [2024-12-10 12:36:52.560180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.649 qpair failed and we were unable to recover it.
[... the same three-record failure (connect() failed, errno = 111, i.e. ECONNREFUSED; sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420; qpair unrecoverable) repeated for every retry from 12:36:52.560292 through 12:36:52.585549 ...]
00:28:30.652 [2024-12-10 12:36:52.585549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.652 [2024-12-10 12:36:52.585578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.652 qpair failed and we were unable to recover it.
00:28:30.652 [2024-12-10 12:36:52.585711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.585744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 00:28:30.652 [2024-12-10 12:36:52.586027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.586059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 00:28:30.652 [2024-12-10 12:36:52.586182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.586215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 00:28:30.652 [2024-12-10 12:36:52.586335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.586366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 00:28:30.652 [2024-12-10 12:36:52.586516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.586547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 
00:28:30.652 [2024-12-10 12:36:52.586809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.586841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 00:28:30.652 [2024-12-10 12:36:52.586954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.586985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 00:28:30.652 [2024-12-10 12:36:52.587219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.587251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 00:28:30.652 [2024-12-10 12:36:52.587359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.587388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 00:28:30.652 [2024-12-10 12:36:52.587510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.587541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 
00:28:30.652 [2024-12-10 12:36:52.587732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.587764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 00:28:30.652 [2024-12-10 12:36:52.587978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.588009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 00:28:30.652 [2024-12-10 12:36:52.588196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.588227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 00:28:30.652 [2024-12-10 12:36:52.588431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.588462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 00:28:30.652 [2024-12-10 12:36:52.588644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.588674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 
00:28:30.652 [2024-12-10 12:36:52.588853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.588884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 00:28:30.652 [2024-12-10 12:36:52.589023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.589055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 00:28:30.652 [2024-12-10 12:36:52.589238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.589271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 00:28:30.652 [2024-12-10 12:36:52.589507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.589540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 00:28:30.652 [2024-12-10 12:36:52.589674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.589704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 
00:28:30.652 [2024-12-10 12:36:52.589823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.589855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 00:28:30.652 [2024-12-10 12:36:52.590035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.590067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 00:28:30.652 [2024-12-10 12:36:52.590243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.590275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 00:28:30.652 [2024-12-10 12:36:52.590460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.652 [2024-12-10 12:36:52.590490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.652 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.590668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.590699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 
00:28:30.653 [2024-12-10 12:36:52.590896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.590928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.591143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.591196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.591402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.591433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.591685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.591715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.591846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.591878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 
00:28:30.653 [2024-12-10 12:36:52.592054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.592084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.592285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.592323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.592602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.592634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.593034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.593066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.593293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.593325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 
00:28:30.653 [2024-12-10 12:36:52.593484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.593515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.593654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.593685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.593796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.593826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.594029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.594061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.594296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.594328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 
00:28:30.653 [2024-12-10 12:36:52.594544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.594576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.594705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.594735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.594948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.594980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.595124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.595154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.595292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.595322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 
00:28:30.653 [2024-12-10 12:36:52.595577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.595608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.595745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.595777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.595954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.595986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.596173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.596205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.596386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.596415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 
00:28:30.653 [2024-12-10 12:36:52.596601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.596634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.596853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.596883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.597169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.597202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.597399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.597431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.597571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.597602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 
00:28:30.653 [2024-12-10 12:36:52.597843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.597875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.598080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.598111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.598237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.598268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.598386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.598421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.598551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.598584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 
00:28:30.653 [2024-12-10 12:36:52.598767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.598797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.598921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.598952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.599131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.653 [2024-12-10 12:36:52.599190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.653 qpair failed and we were unable to recover it. 00:28:30.653 [2024-12-10 12:36:52.599370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.654 [2024-12-10 12:36:52.599402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.654 qpair failed and we were unable to recover it. 00:28:30.654 [2024-12-10 12:36:52.599533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.654 [2024-12-10 12:36:52.599562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.654 qpair failed and we were unable to recover it. 
00:28:30.654 [2024-12-10 12:36:52.599699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.654 [2024-12-10 12:36:52.599729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.654 qpair failed and we were unable to recover it. 00:28:30.654 [2024-12-10 12:36:52.599934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.654 [2024-12-10 12:36:52.599965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.654 qpair failed and we were unable to recover it. 00:28:30.654 [2024-12-10 12:36:52.600264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.654 [2024-12-10 12:36:52.600299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.654 qpair failed and we were unable to recover it. 00:28:30.654 [2024-12-10 12:36:52.600483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.654 [2024-12-10 12:36:52.600516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.654 qpair failed and we were unable to recover it. 00:28:30.654 [2024-12-10 12:36:52.600826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.654 [2024-12-10 12:36:52.600856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.654 qpair failed and we were unable to recover it. 
00:28:30.654 [2024-12-10 12:36:52.600976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.654 [2024-12-10 12:36:52.601008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.654 qpair failed and we were unable to recover it. 00:28:30.654 [2024-12-10 12:36:52.601217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.654 [2024-12-10 12:36:52.601250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.654 qpair failed and we were unable to recover it. 00:28:30.654 [2024-12-10 12:36:52.601448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.654 [2024-12-10 12:36:52.601480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.654 qpair failed and we were unable to recover it. 00:28:30.654 [2024-12-10 12:36:52.601753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.654 [2024-12-10 12:36:52.601785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.654 qpair failed and we were unable to recover it. 00:28:30.654 [2024-12-10 12:36:52.601987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.654 [2024-12-10 12:36:52.602019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.654 qpair failed and we were unable to recover it. 
00:28:30.654 [2024-12-10 12:36:52.602143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.654 [2024-12-10 12:36:52.602184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.654 qpair failed and we were unable to recover it.
[... the same error pair (posix.c:1054:posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously from 12:36:52.602 through 12:36:52.628, first for tqpair=0x1574be0, then for tqpair=0x7f88d0000b90, then again for tqpair=0x1574be0 ...]
00:28:30.657 [2024-12-10 12:36:52.628922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.657 [2024-12-10 12:36:52.628952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.657 qpair failed and we were unable to recover it.
00:28:30.657 [2024-12-10 12:36:52.629225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.657 [2024-12-10 12:36:52.629257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.657 qpair failed and we were unable to recover it. 00:28:30.657 [2024-12-10 12:36:52.629384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.657 [2024-12-10 12:36:52.629414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.657 qpair failed and we were unable to recover it. 00:28:30.657 [2024-12-10 12:36:52.629595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.657 [2024-12-10 12:36:52.629626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.657 qpair failed and we were unable to recover it. 00:28:30.657 [2024-12-10 12:36:52.629757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.657 [2024-12-10 12:36:52.629793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.657 qpair failed and we were unable to recover it. 00:28:30.657 [2024-12-10 12:36:52.630068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.657 [2024-12-10 12:36:52.630098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.657 qpair failed and we were unable to recover it. 
00:28:30.657 [2024-12-10 12:36:52.630289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.657 [2024-12-10 12:36:52.630322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.657 qpair failed and we were unable to recover it. 00:28:30.657 [2024-12-10 12:36:52.630592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.657 [2024-12-10 12:36:52.630627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.657 qpair failed and we were unable to recover it. 00:28:30.657 [2024-12-10 12:36:52.630832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.657 [2024-12-10 12:36:52.630863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.657 qpair failed and we were unable to recover it. 00:28:30.657 [2024-12-10 12:36:52.631039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.657 [2024-12-10 12:36:52.631070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.657 qpair failed and we were unable to recover it. 00:28:30.657 [2024-12-10 12:36:52.631253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.657 [2024-12-10 12:36:52.631284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.657 qpair failed and we were unable to recover it. 
00:28:30.657 [2024-12-10 12:36:52.631465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.657 [2024-12-10 12:36:52.631495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.657 qpair failed and we were unable to recover it. 00:28:30.657 [2024-12-10 12:36:52.631681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.657 [2024-12-10 12:36:52.631712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.657 qpair failed and we were unable to recover it. 00:28:30.657 [2024-12-10 12:36:52.631917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.657 [2024-12-10 12:36:52.631949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.657 qpair failed and we were unable to recover it. 00:28:30.657 [2024-12-10 12:36:52.632069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.657 [2024-12-10 12:36:52.632101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.657 qpair failed and we were unable to recover it. 00:28:30.657 [2024-12-10 12:36:52.632295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.657 [2024-12-10 12:36:52.632327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.657 qpair failed and we were unable to recover it. 
00:28:30.657 [2024-12-10 12:36:52.632536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.657 [2024-12-10 12:36:52.632568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.657 qpair failed and we were unable to recover it. 00:28:30.657 [2024-12-10 12:36:52.632743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.657 [2024-12-10 12:36:52.632772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.657 qpair failed and we were unable to recover it. 00:28:30.657 [2024-12-10 12:36:52.632997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.657 [2024-12-10 12:36:52.633028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.657 qpair failed and we were unable to recover it. 00:28:30.657 [2024-12-10 12:36:52.633232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.657 [2024-12-10 12:36:52.633265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.657 qpair failed and we were unable to recover it. 00:28:30.657 [2024-12-10 12:36:52.633514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.657 [2024-12-10 12:36:52.633545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.657 qpair failed and we were unable to recover it. 
00:28:30.657 [2024-12-10 12:36:52.633740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.657 [2024-12-10 12:36:52.633771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.657 qpair failed and we were unable to recover it. 00:28:30.657 [2024-12-10 12:36:52.633951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.633982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.634094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.634124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.634404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.634437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.634617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.634647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 
00:28:30.658 [2024-12-10 12:36:52.634923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.634954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.635153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.635196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.635397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.635429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.635551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.635582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.635789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.635819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 
00:28:30.658 [2024-12-10 12:36:52.635927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.635964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.636154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.636196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.636475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.636506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.636812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.636843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.637109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.637140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 
00:28:30.658 [2024-12-10 12:36:52.637346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.637378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.637637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.637667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.637860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.637892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.638144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.638185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.638403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.638435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 
00:28:30.658 [2024-12-10 12:36:52.638614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.638645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.638764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.638795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.639068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.639098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.639261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.639294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.639410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.639442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 
00:28:30.658 [2024-12-10 12:36:52.639648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.639679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.639856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.639887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.640156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.640206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.640437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.640468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.640646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.640677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 
00:28:30.658 [2024-12-10 12:36:52.640866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.640897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.641095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.641126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.641422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.641454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.641581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.641612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.641905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.641936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 
00:28:30.658 [2024-12-10 12:36:52.642220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.642252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.642514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.642545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.642751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.642782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.643059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.643091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.658 qpair failed and we were unable to recover it. 00:28:30.658 [2024-12-10 12:36:52.643345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.658 [2024-12-10 12:36:52.643377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 
00:28:30.659 [2024-12-10 12:36:52.643515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-12-10 12:36:52.643546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-12-10 12:36:52.643668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-12-10 12:36:52.643699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-12-10 12:36:52.643901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-12-10 12:36:52.643932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-12-10 12:36:52.644188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-12-10 12:36:52.644220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-12-10 12:36:52.644399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-12-10 12:36:52.644430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 
00:28:30.659 [2024-12-10 12:36:52.644624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-12-10 12:36:52.644655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-12-10 12:36:52.644938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-12-10 12:36:52.644969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-12-10 12:36:52.645151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-12-10 12:36:52.645192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-12-10 12:36:52.645400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-12-10 12:36:52.645431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-12-10 12:36:52.645688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-12-10 12:36:52.645719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 
00:28:30.659 [2024-12-10 12:36:52.645899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-12-10 12:36:52.645930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-12-10 12:36:52.646187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-12-10 12:36:52.646220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-12-10 12:36:52.646402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-12-10 12:36:52.646433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-12-10 12:36:52.646691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-12-10 12:36:52.646721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 00:28:30.659 [2024-12-10 12:36:52.646927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.659 [2024-12-10 12:36:52.646958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.659 qpair failed and we were unable to recover it. 
00:28:30.659 [2024-12-10 12:36:52.647231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.659 [2024-12-10 12:36:52.647263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.659 qpair failed and we were unable to recover it.
[... the same connect() errno = 111 / nvme_tcp_qpair_connect_sock / "qpair failed" triplet for tqpair=0x1574be0 (addr=10.0.0.2, port=4420) repeated for each reconnect attempt from 12:36:52.647466 through 12:36:52.675308 ...]
00:28:30.662 [2024-12-10 12:36:52.675494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.662 [2024-12-10 12:36:52.675525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.662 qpair failed and we were unable to recover it.
00:28:30.662 [2024-12-10 12:36:52.675701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.662 [2024-12-10 12:36:52.675732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.662 qpair failed and we were unable to recover it. 00:28:30.662 [2024-12-10 12:36:52.676014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.662 [2024-12-10 12:36:52.676044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.662 qpair failed and we were unable to recover it. 00:28:30.662 [2024-12-10 12:36:52.676256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.662 [2024-12-10 12:36:52.676288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.662 qpair failed and we were unable to recover it. 00:28:30.662 [2024-12-10 12:36:52.676588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.662 [2024-12-10 12:36:52.676625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.662 qpair failed and we were unable to recover it. 00:28:30.662 [2024-12-10 12:36:52.676911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.662 [2024-12-10 12:36:52.676943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.662 qpair failed and we were unable to recover it. 
00:28:30.662 [2024-12-10 12:36:52.677195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.662 [2024-12-10 12:36:52.677228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.662 qpair failed and we were unable to recover it. 00:28:30.662 [2024-12-10 12:36:52.677527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.662 [2024-12-10 12:36:52.677558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.662 qpair failed and we were unable to recover it. 00:28:30.662 [2024-12-10 12:36:52.677866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.662 [2024-12-10 12:36:52.677897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.662 qpair failed and we were unable to recover it. 00:28:30.662 [2024-12-10 12:36:52.678076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.662 [2024-12-10 12:36:52.678106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.662 qpair failed and we were unable to recover it. 00:28:30.662 [2024-12-10 12:36:52.678294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.662 [2024-12-10 12:36:52.678326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.662 qpair failed and we were unable to recover it. 
00:28:30.662 [2024-12-10 12:36:52.678517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.662 [2024-12-10 12:36:52.678547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.662 qpair failed and we were unable to recover it. 00:28:30.662 [2024-12-10 12:36:52.678666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.662 [2024-12-10 12:36:52.678697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.662 qpair failed and we were unable to recover it. 00:28:30.662 [2024-12-10 12:36:52.678886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.662 [2024-12-10 12:36:52.678917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.662 qpair failed and we were unable to recover it. 00:28:30.662 [2024-12-10 12:36:52.679096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.662 [2024-12-10 12:36:52.679126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.662 qpair failed and we were unable to recover it. 00:28:30.662 [2024-12-10 12:36:52.679413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.662 [2024-12-10 12:36:52.679446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.662 qpair failed and we were unable to recover it. 
00:28:30.662 [2024-12-10 12:36:52.679763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.662 [2024-12-10 12:36:52.679795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.662 qpair failed and we were unable to recover it. 00:28:30.662 [2024-12-10 12:36:52.680046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.662 [2024-12-10 12:36:52.680076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.662 qpair failed and we were unable to recover it. 00:28:30.662 [2024-12-10 12:36:52.680264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.662 [2024-12-10 12:36:52.680297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.662 qpair failed and we were unable to recover it. 00:28:30.662 [2024-12-10 12:36:52.680442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.662 [2024-12-10 12:36:52.680473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.662 qpair failed and we were unable to recover it. 00:28:30.662 [2024-12-10 12:36:52.680655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.680685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 
00:28:30.663 [2024-12-10 12:36:52.680879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.680911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.681212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.681246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.681489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.681519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.681813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.681845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.682119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.682150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 
00:28:30.663 [2024-12-10 12:36:52.682290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.682321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.682502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.682532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.682727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.682758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.682960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.682990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.683174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.683206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 
00:28:30.663 [2024-12-10 12:36:52.683470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.683507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.683712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.683744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.683921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.683951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.684133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.684186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.684407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.684438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 
00:28:30.663 [2024-12-10 12:36:52.684694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.684724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.684908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.684939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.685139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.685184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.685457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.685488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.685683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.685714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 
00:28:30.663 [2024-12-10 12:36:52.685929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.685961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.686170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.686203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.686457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.686488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.686761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.686792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.686978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.687009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 
00:28:30.663 [2024-12-10 12:36:52.687150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.687194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.687391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.687421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.687621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.687654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.687918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.687953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.688244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.688280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 
00:28:30.663 [2024-12-10 12:36:52.688552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.688585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.688783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.688816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.689025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.689057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.689247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.689290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.689548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.689578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 
00:28:30.663 [2024-12-10 12:36:52.689781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.689812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.690096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.690134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.690412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.663 [2024-12-10 12:36:52.690456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.663 qpair failed and we were unable to recover it. 00:28:30.663 [2024-12-10 12:36:52.690690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.690722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.690997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.691030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 
00:28:30.664 [2024-12-10 12:36:52.691212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.691245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.691530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.691562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.691841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.691873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.692200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.692234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.692414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.692445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 
00:28:30.664 [2024-12-10 12:36:52.692566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.692598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.692774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.692806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.693022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.693053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.693315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.693349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.693605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.693636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 
00:28:30.664 [2024-12-10 12:36:52.693814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.693846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.694238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.694315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.694623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.694661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.694945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.694980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.695272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.695308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 
00:28:30.664 [2024-12-10 12:36:52.695492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.695524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.695722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.695753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.695956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.695988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.696184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.696218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.696401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.696432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 
00:28:30.664 [2024-12-10 12:36:52.696716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.696748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.697034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.697067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.697268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.697303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.697532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.697562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.697743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.697784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 
00:28:30.664 [2024-12-10 12:36:52.697914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.697945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.698286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.698322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.698528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.698564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.698849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.698882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.699086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.699118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 
00:28:30.664 [2024-12-10 12:36:52.699252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.699289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.699518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.699549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.699736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.699767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.699944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.699974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.700263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.700298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 
00:28:30.664 [2024-12-10 12:36:52.700526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.700557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.700808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.664 [2024-12-10 12:36:52.700839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.664 qpair failed and we were unable to recover it. 00:28:30.664 [2024-12-10 12:36:52.701041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.701072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.701209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.701244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.701427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.701460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 
00:28:30.665 [2024-12-10 12:36:52.701643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.701674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.701864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.701896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.702074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.702104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.702232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.702267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.702550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.702584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 
00:28:30.665 [2024-12-10 12:36:52.702780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.702811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.702957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.702987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.703260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.703294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.703482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.703513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.703717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.703746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 
00:28:30.665 [2024-12-10 12:36:52.703867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.703898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.704195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.704231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.704524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.704557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.704816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.704847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.705155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.705194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 
00:28:30.665 [2024-12-10 12:36:52.705456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.705491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.705792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.705823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.706088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.706118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.706305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.706338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.706513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.706544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 
00:28:30.665 [2024-12-10 12:36:52.706719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.706750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.706869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.706899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.707179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.707212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.707470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.707501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.707757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.707795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 
00:28:30.665 [2024-12-10 12:36:52.707929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.707961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.708081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.708113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.708370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.708403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.708707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.708739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.708918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.708951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 
00:28:30.665 [2024-12-10 12:36:52.709132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.709185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.709317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.709349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.709625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.709656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.709926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.709956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.710213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.710248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 
00:28:30.665 [2024-12-10 12:36:52.710455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.665 [2024-12-10 12:36:52.710496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.665 qpair failed and we were unable to recover it. 00:28:30.665 [2024-12-10 12:36:52.710638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.710669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.710957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.710991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.711128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.711167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.711425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.711462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 
00:28:30.666 [2024-12-10 12:36:52.711644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.711679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.711970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.712005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.712210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.712249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.712451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.712483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.712662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.712694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 
00:28:30.666 [2024-12-10 12:36:52.712871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.712901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.713076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.713107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.713293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.713324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.713538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.713570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.713814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.713845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 
00:28:30.666 [2024-12-10 12:36:52.714047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.714077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.714454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.714532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.714836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.714871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.715112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.715146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.715444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.715476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 
00:28:30.666 [2024-12-10 12:36:52.715605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.715637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.715830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.715862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.716068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.716101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.716388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.716422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.716623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.716654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 
00:28:30.666 [2024-12-10 12:36:52.716764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.716796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.716980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.717012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.717211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.717244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.717441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.717473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.717651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.717692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 
00:28:30.666 [2024-12-10 12:36:52.717970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.718002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.718204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.718237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.718416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.718448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.718723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.718755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 00:28:30.666 [2024-12-10 12:36:52.718885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.666 [2024-12-10 12:36:52.718916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.666 qpair failed and we were unable to recover it. 
00:28:30.667 [2024-12-10 12:36:52.719119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.667 [2024-12-10 12:36:52.719151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.667 qpair failed and we were unable to recover it. 00:28:30.667 [2024-12-10 12:36:52.719414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.667 [2024-12-10 12:36:52.719445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.667 qpair failed and we were unable to recover it. 00:28:30.667 [2024-12-10 12:36:52.719656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.667 [2024-12-10 12:36:52.719687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.667 qpair failed and we were unable to recover it. 00:28:30.667 [2024-12-10 12:36:52.719903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.667 [2024-12-10 12:36:52.719934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.667 qpair failed and we were unable to recover it. 00:28:30.667 [2024-12-10 12:36:52.720066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.667 [2024-12-10 12:36:52.720099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.667 qpair failed and we were unable to recover it. 
00:28:30.667 [2024-12-10 12:36:52.720260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.667 [2024-12-10 12:36:52.720293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.667 qpair failed and we were unable to recover it. 00:28:30.667 [2024-12-10 12:36:52.720546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.667 [2024-12-10 12:36:52.720578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.667 qpair failed and we were unable to recover it. 00:28:30.667 [2024-12-10 12:36:52.720777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.667 [2024-12-10 12:36:52.720807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.667 qpair failed and we were unable to recover it. 00:28:30.667 [2024-12-10 12:36:52.721099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.667 [2024-12-10 12:36:52.721131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.667 qpair failed and we were unable to recover it. 00:28:30.667 [2024-12-10 12:36:52.721378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.667 [2024-12-10 12:36:52.721410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.667 qpair failed and we were unable to recover it. 
00:28:30.667 [2024-12-10 12:36:52.721676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.721709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.722006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.722037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.722266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.722299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.722482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.722514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.722722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.722753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.722956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.722987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.723264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.723296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.723498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.723531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.723712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.723743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.723937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.723969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.724170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.724203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.724453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.724530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.724793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.724868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.725197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.725235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.725427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.725459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.725638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.725670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.725891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.725922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.726100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.726131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.726341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.726375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.726495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.726526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.726778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.726810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.727077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.727108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.727318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.727350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.727531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.727564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.727839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.667 [2024-12-10 12:36:52.727880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.667 qpair failed and we were unable to recover it.
00:28:30.667 [2024-12-10 12:36:52.728070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.728101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.728299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.728332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.728557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.728589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.728724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.728755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.728964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.728996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.729179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.729212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.729340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.729371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.729641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.729672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.729917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.729948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.730130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.730171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.730380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.730412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.730661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.730692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.730953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.730985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.731289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.731323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.731589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.731620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.731903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.731934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.732222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.732254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.732481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.732512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.732788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.732819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.732939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.732969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.733144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.733184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.733388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.733418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.733598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.733629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.733840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.733871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.734067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.734098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.734299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.734330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.734545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.734602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.734855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.734893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.735105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.735137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.735418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.735455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.735738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.735772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.735982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.736017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.736143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.736187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.736409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.736443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.736635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.736671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.736851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.736884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.737075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.668 [2024-12-10 12:36:52.737113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.668 qpair failed and we were unable to recover it.
00:28:30.668 [2024-12-10 12:36:52.737317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.669 [2024-12-10 12:36:52.737354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.669 qpair failed and we were unable to recover it.
00:28:30.669 [2024-12-10 12:36:52.737571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.669 [2024-12-10 12:36:52.737604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.669 qpair failed and we were unable to recover it.
00:28:30.669 [2024-12-10 12:36:52.737785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.669 [2024-12-10 12:36:52.737822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.669 qpair failed and we were unable to recover it.
00:28:30.669 [2024-12-10 12:36:52.738021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.669 [2024-12-10 12:36:52.738059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.669 qpair failed and we were unable to recover it.
00:28:30.669 [2024-12-10 12:36:52.738264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.669 [2024-12-10 12:36:52.738303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.669 qpair failed and we were unable to recover it.
00:28:30.669 [2024-12-10 12:36:52.738566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.669 [2024-12-10 12:36:52.738599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.669 qpair failed and we were unable to recover it.
00:28:30.669 [2024-12-10 12:36:52.738829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.669 [2024-12-10 12:36:52.738862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.669 qpair failed and we were unable to recover it.
00:28:30.669 [2024-12-10 12:36:52.739095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.669 [2024-12-10 12:36:52.739128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.669 qpair failed and we were unable to recover it.
00:28:30.669 [2024-12-10 12:36:52.739375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.669 [2024-12-10 12:36:52.739411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.669 qpair failed and we were unable to recover it.
00:28:30.669 [2024-12-10 12:36:52.739674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.669 [2024-12-10 12:36:52.739707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.669 qpair failed and we were unable to recover it.
00:28:30.669 [2024-12-10 12:36:52.739944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.669 [2024-12-10 12:36:52.739977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.669 qpair failed and we were unable to recover it.
00:28:30.669 [2024-12-10 12:36:52.740212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.669 [2024-12-10 12:36:52.740249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.669 qpair failed and we were unable to recover it.
00:28:30.669 [2024-12-10 12:36:52.740408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.669 [2024-12-10 12:36:52.740440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.669 qpair failed and we were unable to recover it.
00:28:30.669 [2024-12-10 12:36:52.740646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.669 [2024-12-10 12:36:52.740678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.669 qpair failed and we were unable to recover it.
00:28:30.669 [2024-12-10 12:36:52.740818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.669 [2024-12-10 12:36:52.740873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.669 qpair failed and we were unable to recover it.
00:28:30.950 [2024-12-10 12:36:52.741174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.950 [2024-12-10 12:36:52.741207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.950 qpair failed and we were unable to recover it.
00:28:30.950 [2024-12-10 12:36:52.741394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.950 [2024-12-10 12:36:52.741436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.950 qpair failed and we were unable to recover it.
00:28:30.950 [2024-12-10 12:36:52.741718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.950 [2024-12-10 12:36:52.741755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.950 qpair failed and we were unable to recover it.
00:28:30.950 [2024-12-10 12:36:52.742026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.950 [2024-12-10 12:36:52.742058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.950 qpair failed and we were unable to recover it.
00:28:30.950 [2024-12-10 12:36:52.742343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.950 [2024-12-10 12:36:52.742380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.950 qpair failed and we were unable to recover it.
00:28:30.950 [2024-12-10 12:36:52.742533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.950 [2024-12-10 12:36:52.742563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.950 qpair failed and we were unable to recover it.
00:28:30.950 [2024-12-10 12:36:52.742751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.950 [2024-12-10 12:36:52.742782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.950 qpair failed and we were unable to recover it.
00:28:30.950 [2024-12-10 12:36:52.742975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.950 [2024-12-10 12:36:52.743006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.950 qpair failed and we were unable to recover it.
00:28:30.950 [2024-12-10 12:36:52.743298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.950 [2024-12-10 12:36:52.743330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.950 qpair failed and we were unable to recover it.
00:28:30.950 [2024-12-10 12:36:52.743537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.950 [2024-12-10 12:36:52.743568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.950 qpair failed and we were unable to recover it.
00:28:30.950 [2024-12-10 12:36:52.743690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.950 [2024-12-10 12:36:52.743721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.950 qpair failed and we were unable to recover it.
00:28:30.950 [2024-12-10 12:36:52.743922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.950 [2024-12-10 12:36:52.743954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.950 qpair failed and we were unable to recover it.
00:28:30.950 [2024-12-10 12:36:52.744132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.950 [2024-12-10 12:36:52.744170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.950 qpair failed and we were unable to recover it.
00:28:30.950 [2024-12-10 12:36:52.744355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.950 [2024-12-10 12:36:52.744386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.950 qpair failed and we were unable to recover it.
00:28:30.950 [2024-12-10 12:36:52.744566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.950 [2024-12-10 12:36:52.744597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.950 qpair failed and we were unable to recover it.
00:28:30.950 [2024-12-10 12:36:52.744785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.950 [2024-12-10 12:36:52.744816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.950 qpair failed and we were unable to recover it.
00:28:30.950 [2024-12-10 12:36:52.744993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.950 [2024-12-10 12:36:52.745025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.950 qpair failed and we were unable to recover it.
00:28:30.950 [2024-12-10 12:36:52.745301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.950 [2024-12-10 12:36:52.745334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.950 qpair failed and we were unable to recover it.
00:28:30.950 [2024-12-10 12:36:52.745530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.950 [2024-12-10 12:36:52.745560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.950 qpair failed and we were unable to recover it. 00:28:30.950 [2024-12-10 12:36:52.745760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.950 [2024-12-10 12:36:52.745791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.950 qpair failed and we were unable to recover it. 00:28:30.950 [2024-12-10 12:36:52.746048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.950 [2024-12-10 12:36:52.746079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.950 qpair failed and we were unable to recover it. 00:28:30.950 [2024-12-10 12:36:52.746278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.950 [2024-12-10 12:36:52.746310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.950 qpair failed and we were unable to recover it. 00:28:30.950 [2024-12-10 12:36:52.746506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.950 [2024-12-10 12:36:52.746536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.950 qpair failed and we were unable to recover it. 
00:28:30.950 [2024-12-10 12:36:52.746735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.950 [2024-12-10 12:36:52.746765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.950 qpair failed and we were unable to recover it. 00:28:30.950 [2024-12-10 12:36:52.746973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.950 [2024-12-10 12:36:52.747004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.950 qpair failed and we were unable to recover it. 00:28:30.950 [2024-12-10 12:36:52.747193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.950 [2024-12-10 12:36:52.747226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.950 qpair failed and we were unable to recover it. 00:28:30.950 [2024-12-10 12:36:52.747404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.950 [2024-12-10 12:36:52.747434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.950 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.747694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.747725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 
00:28:30.951 [2024-12-10 12:36:52.747964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.747995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.748220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.748251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.748520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.748551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.748848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.748880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.749006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.749038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 
00:28:30.951 [2024-12-10 12:36:52.749307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.749338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.749539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.749571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.749869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.749900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.750039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.750070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.750200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.750233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 
00:28:30.951 [2024-12-10 12:36:52.750444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.750475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.750656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.750687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.750811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.750842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.751115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.751152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.751369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.751401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 
00:28:30.951 [2024-12-10 12:36:52.751577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.751609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.751820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.751851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.752049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.752079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.752259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.752312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.752593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.752623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 
00:28:30.951 [2024-12-10 12:36:52.752744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.752775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.753048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.753078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.753258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.753289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.753467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.753498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.753790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.753821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 
00:28:30.951 [2024-12-10 12:36:52.754077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.754107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.754424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.754456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.754706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.754738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.754958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.754989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.755192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.755223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 
00:28:30.951 [2024-12-10 12:36:52.755477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.755508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.755795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.755825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.756109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.756140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.756348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.756380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.756500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.756532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 
00:28:30.951 [2024-12-10 12:36:52.756744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.756774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.756973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.757003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.951 qpair failed and we were unable to recover it. 00:28:30.951 [2024-12-10 12:36:52.757299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.951 [2024-12-10 12:36:52.757332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.757604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.757636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.757855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.757885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 
00:28:30.952 [2024-12-10 12:36:52.758071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.758103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.758313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.758345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.758519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.758551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.758750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.758781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.759032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.759063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 
00:28:30.952 [2024-12-10 12:36:52.759246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.759278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.759550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.759581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.759776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.759806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.760081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.760112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.760301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.760333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 
00:28:30.952 [2024-12-10 12:36:52.760549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.760581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.760780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.760809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.761011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.761043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.761194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.761232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.761436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.761469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 
00:28:30.952 [2024-12-10 12:36:52.761683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.761714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.761930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.761961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.762233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.762265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.762449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.762480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.762727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.762757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 
00:28:30.952 [2024-12-10 12:36:52.762948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.762977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.763168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.763200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.763408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.763439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.763706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.763737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.763934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.763965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 
00:28:30.952 [2024-12-10 12:36:52.764232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.764264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.764395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.764425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.764553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.764585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.764866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.764897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.765200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.765234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 
00:28:30.952 [2024-12-10 12:36:52.765415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.765447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.765639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.765670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.765891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.765923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.766098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.766128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.766404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.766437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 
00:28:30.952 [2024-12-10 12:36:52.766563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.766593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.952 qpair failed and we were unable to recover it. 00:28:30.952 [2024-12-10 12:36:52.766869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.952 [2024-12-10 12:36:52.766900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.767181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.767214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.767502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.767534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.767813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.767843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 
00:28:30.953 [2024-12-10 12:36:52.768184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.953 [2024-12-10 12:36:52.768260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.953 qpair failed and we were unable to recover it.
00:28:30.953 [2024-12-10 12:36:52.768468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.953 [2024-12-10 12:36:52.768505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.953 qpair failed and we were unable to recover it.
00:28:30.953 [2024-12-10 12:36:52.768713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.953 [2024-12-10 12:36:52.768745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.953 qpair failed and we were unable to recover it.
00:28:30.953 [2024-12-10 12:36:52.768867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.953 [2024-12-10 12:36:52.768899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.953 qpair failed and we were unable to recover it.
00:28:30.953 [2024-12-10 12:36:52.769119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.953 [2024-12-10 12:36:52.769151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:30.953 qpair failed and we were unable to recover it.
00:28:30.953 [2024-12-10 12:36:52.769343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.769376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.769573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.769604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.769779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.769811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.769986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.770017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.770144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.770185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 
00:28:30.953 [2024-12-10 12:36:52.770364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.770396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.770507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.770539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.770726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.770757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.771031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.771073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.771357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.771389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 
00:28:30.953 [2024-12-10 12:36:52.771532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.771564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.771815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.771847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.772028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.772060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.772190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.772222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.772493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.772525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 
00:28:30.953 [2024-12-10 12:36:52.772703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.772734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.773007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.773040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.773266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.773299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.773478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.773510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.773765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.773797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 
00:28:30.953 [2024-12-10 12:36:52.774013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.774045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.774248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.774281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.774418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.774450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.774644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.774675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.774952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.774984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 
00:28:30.953 [2024-12-10 12:36:52.775280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.775313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.775582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.775614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.775910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.775941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.776216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.776248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 00:28:30.953 [2024-12-10 12:36:52.776528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.953 [2024-12-10 12:36:52.776560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.953 qpair failed and we were unable to recover it. 
00:28:30.953 [2024-12-10 12:36:52.776851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.776882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.777165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.777199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.777417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.777449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.777663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.777695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.777974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.778006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 
00:28:30.954 [2024-12-10 12:36:52.778311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.778344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.778554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.778586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.778841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.778872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.779061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.779093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.779268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.779300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 
00:28:30.954 [2024-12-10 12:36:52.779532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.779563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.779743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.779774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.780045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.780077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.780303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.780335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.780611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.780642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 
00:28:30.954 [2024-12-10 12:36:52.780931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.780962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.781242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.781275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.781551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.781582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.781795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.781834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.781983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.782016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 
00:28:30.954 [2024-12-10 12:36:52.782132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.782184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.782443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.782475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.782681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.782713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.782844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.782875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.783075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.783107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 
00:28:30.954 [2024-12-10 12:36:52.783297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.783330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.783534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.783565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.783766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.783798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.783993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.784024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.784296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.784329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 
00:28:30.954 [2024-12-10 12:36:52.784452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.784484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.784761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.784793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.785006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.785039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.954 [2024-12-10 12:36:52.785334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.954 [2024-12-10 12:36:52.785366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.954 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.785545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.785578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 
00:28:30.955 [2024-12-10 12:36:52.785853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.785885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.786129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.786169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.786431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.786462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.786749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.786781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.786965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.786997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 
00:28:30.955 [2024-12-10 12:36:52.787218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.787251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.787542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.787574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.787764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.787795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.787978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.788010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.788211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.788244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 
00:28:30.955 [2024-12-10 12:36:52.788446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.788478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.788731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.788763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.788951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.788983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.789185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.789217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.789492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.789524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 
00:28:30.955 [2024-12-10 12:36:52.789633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.789665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.789885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.789917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.790127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.790176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.790455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.790487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.790692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.790723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 
00:28:30.955 [2024-12-10 12:36:52.790854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.790886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.791013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.791044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.791181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.791214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.791515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.791553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.791790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.791823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 
00:28:30.955 [2024-12-10 12:36:52.792146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.792190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.792464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.792496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.792776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.792808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.792945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.792976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.793155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.793201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 
00:28:30.955 [2024-12-10 12:36:52.793409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.793441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.793618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.793649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.793826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.793858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.794137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.794179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.794479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.794511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 
00:28:30.955 [2024-12-10 12:36:52.794769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.794800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.794946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.794977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.955 [2024-12-10 12:36:52.795293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.955 [2024-12-10 12:36:52.795327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.955 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.795603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.795634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.795883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.795915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 
00:28:30.956 [2024-12-10 12:36:52.796183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.796216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.796516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.796548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.796819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.796850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.797051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.797082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.797284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.797317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 
00:28:30.956 [2024-12-10 12:36:52.797585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.797617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.797875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.797906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.798109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.798141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.798359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.798392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.798573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.798606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 
00:28:30.956 [2024-12-10 12:36:52.798879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.798955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.799248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.799286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.799596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.799629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.799812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.799843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.800115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.800147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 
00:28:30.956 [2024-12-10 12:36:52.800350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.800383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.800657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.800688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.800978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.801009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.801288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.801321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.801549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.801580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 
00:28:30.956 [2024-12-10 12:36:52.801710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.801741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.801948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.801979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.802098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.802129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.802383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.802470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.802803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.802839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 
00:28:30.956 [2024-12-10 12:36:52.802974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.803007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.803215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.803248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.803540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.803573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.803845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.803876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.804094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.804126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 
00:28:30.956 [2024-12-10 12:36:52.804408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.804440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.804720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.804752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.805046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.805078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.805351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.805384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.805648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.805680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 
00:28:30.956 [2024-12-10 12:36:52.805888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.805919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.956 [2024-12-10 12:36:52.806137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.956 [2024-12-10 12:36:52.806177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.956 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.806372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.806410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.806693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.806725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.806857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.806888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 
00:28:30.957 [2024-12-10 12:36:52.807173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.807206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.807507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.807538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.807795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.807827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.808054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.808085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.808267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.808300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 
00:28:30.957 [2024-12-10 12:36:52.808494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.808526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.808805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.808836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.809041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.809073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.809327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.809360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.809556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.809589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 
00:28:30.957 [2024-12-10 12:36:52.809791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.809828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.810106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.810143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.810365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.810397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.810576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.810607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.810817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.810849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 
00:28:30.957 [2024-12-10 12:36:52.811027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.811059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.811264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.811297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.811573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.811605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.811887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.811919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.812039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.812070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 
00:28:30.957 [2024-12-10 12:36:52.812247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.812279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.812546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.812579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.812760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.812791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.813009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.813040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.813155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.813196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 
00:28:30.957 [2024-12-10 12:36:52.813486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.813519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.813695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.813725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.813977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.814008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.814266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.814301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.814497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.814529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 
00:28:30.957 [2024-12-10 12:36:52.814826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.814858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.815061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.815093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.815226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.815259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.815539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.815571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.957 [2024-12-10 12:36:52.815749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.815781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 
00:28:30.957 [2024-12-10 12:36:52.816061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.957 [2024-12-10 12:36:52.816092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.957 qpair failed and we were unable to recover it. 00:28:30.958 [2024-12-10 12:36:52.816356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.958 [2024-12-10 12:36:52.816390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.958 qpair failed and we were unable to recover it. 00:28:30.958 [2024-12-10 12:36:52.816722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.958 [2024-12-10 12:36:52.816799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.958 qpair failed and we were unable to recover it. 00:28:30.958 [2024-12-10 12:36:52.817104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.958 [2024-12-10 12:36:52.817140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.958 qpair failed and we were unable to recover it. 00:28:30.958 [2024-12-10 12:36:52.817352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.958 [2024-12-10 12:36:52.817384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.958 qpair failed and we were unable to recover it. 
00:28:30.958 [2024-12-10 12:36:52.817568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.958 [2024-12-10 12:36:52.817599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.958 qpair failed and we were unable to recover it. 00:28:30.958 [2024-12-10 12:36:52.817896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.958 [2024-12-10 12:36:52.817927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.958 qpair failed and we were unable to recover it. 00:28:30.958 [2024-12-10 12:36:52.818107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.958 [2024-12-10 12:36:52.818138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.958 qpair failed and we were unable to recover it. 00:28:30.958 [2024-12-10 12:36:52.818409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.958 [2024-12-10 12:36:52.818442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.958 qpair failed and we were unable to recover it. 00:28:30.958 [2024-12-10 12:36:52.818720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.958 [2024-12-10 12:36:52.818751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.958 qpair failed and we were unable to recover it. 
00:28:30.958 [2024-12-10 12:36:52.818882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.818913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.819112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.819143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.819355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.819387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.819587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.819618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.819896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.819928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.820071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.820112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.820417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.820449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.820658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.820689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.820940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.820971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.821188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.821221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.821400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.821431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.821628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.821660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.821841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.821871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.822143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.822192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.822383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.822413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.822594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.822625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.822826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.822857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.823134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.823175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.823304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.823335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.823596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.823627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.823805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.823835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.824035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.824067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.824200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.824232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.824429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.824459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.824589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.824619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.824799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.824830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.825108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.825139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.825273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.825304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.825501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.825532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.825728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.958 [2024-12-10 12:36:52.825759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.958 qpair failed and we were unable to recover it.
00:28:30.958 [2024-12-10 12:36:52.826009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.826041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.826322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.826354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.826634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.826667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.826952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.826983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.827268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.827299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.827501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.827533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.827732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.827763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.828036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.828067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.828246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.828278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.828457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.828487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.828755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.828786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.828967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.828998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.829271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.829302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.829506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.829537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.829715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.829746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.829923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.829965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.830143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.830186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.830375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.830406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.830584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.830614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.830867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.830897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.831117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.831149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.831360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.831392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.831643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.831674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.831856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.831887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.832090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.832120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.832252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.832285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.832465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.832496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.832697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.832728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.833031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.833062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.833331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.833365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.833616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.833647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.833848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.833878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.834091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.834122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.834307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.959 [2024-12-10 12:36:52.834339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.959 qpair failed and we were unable to recover it.
00:28:30.959 [2024-12-10 12:36:52.834543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.834574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.834758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.834788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.834963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.834994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.835116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.835147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.835290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.835320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.835494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.835526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.835801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.835832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.836118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.836148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.836315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.836348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.836528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.836560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.836748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.836779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.836974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.837005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.837212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.837246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.837500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.837530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.837840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.837870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.838050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.838082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.838306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.838339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.838521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.838551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.838672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.838703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.838978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.839009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.839189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.839221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.839431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.839468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.839743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.839774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.839974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.840005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.840256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.840288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.840491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.840522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.840795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.840826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.841009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.841039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.841319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.841352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.841674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.841706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.841897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.960 [2024-12-10 12:36:52.841928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.960 qpair failed and we were unable to recover it.
00:28:30.960 [2024-12-10 12:36:52.842153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.960 [2024-12-10 12:36:52.842212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.960 qpair failed and we were unable to recover it. 00:28:30.960 [2024-12-10 12:36:52.842515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.960 [2024-12-10 12:36:52.842547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.960 qpair failed and we were unable to recover it. 00:28:30.960 [2024-12-10 12:36:52.842796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.960 [2024-12-10 12:36:52.842827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.960 qpair failed and we were unable to recover it. 00:28:30.960 [2024-12-10 12:36:52.843147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.960 [2024-12-10 12:36:52.843189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.960 qpair failed and we were unable to recover it. 00:28:30.960 [2024-12-10 12:36:52.843471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.960 [2024-12-10 12:36:52.843503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.960 qpair failed and we were unable to recover it. 
00:28:30.960 [2024-12-10 12:36:52.843787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.960 [2024-12-10 12:36:52.843818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.960 qpair failed and we were unable to recover it. 00:28:30.960 [2024-12-10 12:36:52.844103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.960 [2024-12-10 12:36:52.844134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.960 qpair failed and we were unable to recover it. 00:28:30.960 [2024-12-10 12:36:52.844325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.960 [2024-12-10 12:36:52.844357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.960 qpair failed and we were unable to recover it. 00:28:30.960 [2024-12-10 12:36:52.844566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.844597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.844865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.844896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 
00:28:30.961 [2024-12-10 12:36:52.845091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.845122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.845382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.845416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.845744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.845776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.846048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.846080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.846311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.846344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 
00:28:30.961 [2024-12-10 12:36:52.846532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.846562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.846685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.846716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.846844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.846876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.847155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.847197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.847404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.847437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 
00:28:30.961 [2024-12-10 12:36:52.847635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.847666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.847845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.847877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.848151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.848192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.848388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.848420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.848722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.848753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 
00:28:30.961 [2024-12-10 12:36:52.848965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.848996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.849244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.849277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.849456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.849488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.849599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.849629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.849807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.849838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 
00:28:30.961 [2024-12-10 12:36:52.850045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.850082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.850364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.850398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.850677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.850708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.850999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.851030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.851214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.851246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 
00:28:30.961 [2024-12-10 12:36:52.851450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.851481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.851758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.851789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.852079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.852111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.852228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.852260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.852467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.852499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 
00:28:30.961 [2024-12-10 12:36:52.852705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.852736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.852859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.852891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.853093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.853124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.853385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.853418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.853646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.853677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 
00:28:30.961 [2024-12-10 12:36:52.853952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.853983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.854187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.854220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.961 qpair failed and we were unable to recover it. 00:28:30.961 [2024-12-10 12:36:52.854398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.961 [2024-12-10 12:36:52.854429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.854605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.854637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.854908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.854940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 
00:28:30.962 [2024-12-10 12:36:52.855131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.855170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.855283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.855314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.855567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.855597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.855717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.855748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.855924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.855956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 
00:28:30.962 [2024-12-10 12:36:52.856228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.856280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.856556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.856587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.856860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.856897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.857082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.857113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.857393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.857425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 
00:28:30.962 [2024-12-10 12:36:52.857714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.857744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.857850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.857881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.858129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.858179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.858316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.858349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.858551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.858582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 
00:28:30.962 [2024-12-10 12:36:52.858779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.858810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.858936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.858967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.859243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.859275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.859563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.859594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.859795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.859826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 
00:28:30.962 [2024-12-10 12:36:52.860021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.860052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.860261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.860294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.860419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.860450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.860627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.860658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.860873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.860905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 
00:28:30.962 [2024-12-10 12:36:52.861118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.861149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.861280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.861312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.861506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.861536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.861786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.861818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.861947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.861978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 
00:28:30.962 [2024-12-10 12:36:52.862239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.862271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.862395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.862426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.862653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.862684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.862886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.862918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 00:28:30.962 [2024-12-10 12:36:52.863180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.962 [2024-12-10 12:36:52.863213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.962 qpair failed and we were unable to recover it. 
00:28:30.962 [2024-12-10 12:36:52.863498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.962 [2024-12-10 12:36:52.863530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.962 qpair failed and we were unable to recover it.
[... identical connect() failure (errno = 111, connection refused) for tqpair=0x7f88cc000b90, addr=10.0.0.2, port=4420 repeated continuously from 12:36:52.863830 through 12:36:52.891879; each repeat ends with "qpair failed and we were unable to recover it." ...]
00:28:30.966 [2024-12-10 12:36:52.892057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.892088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.892364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.892397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.892721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.892753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.893023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.893053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.893351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.893384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 
00:28:30.966 [2024-12-10 12:36:52.893659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.893690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.893895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.893925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.894055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.894087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.894278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.894310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.894593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.894624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 
00:28:30.966 [2024-12-10 12:36:52.894902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.894933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.895226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.895258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.895514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.895545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.895793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.895824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.896126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.896168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 
00:28:30.966 [2024-12-10 12:36:52.896458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.896489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.896668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.896699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.896975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.897010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.897135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.897188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.897373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.897403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 
00:28:30.966 [2024-12-10 12:36:52.897679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.897710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.897989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.898019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.898308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.898339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.898619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.898651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.898935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.898965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 
00:28:30.966 [2024-12-10 12:36:52.899251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.899283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.899568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.899599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.899860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.899891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.900084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.900115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.900399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.900431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 
00:28:30.966 [2024-12-10 12:36:52.900639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.900671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.900787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.900818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.901073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.901104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.901233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.901265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.901441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.901473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 
00:28:30.966 [2024-12-10 12:36:52.901668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.901698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.901832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.901864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.902118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.966 [2024-12-10 12:36:52.902149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.966 qpair failed and we were unable to recover it. 00:28:30.966 [2024-12-10 12:36:52.902342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.902373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.902625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.902657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 
00:28:30.967 [2024-12-10 12:36:52.902854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.902885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.903084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.903115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.903375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.903407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.903532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.903562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.903840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.903872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 
00:28:30.967 [2024-12-10 12:36:52.904049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.904079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.904355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.904388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.904591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.904622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.904822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.904852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.905031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.905063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 
00:28:30.967 [2024-12-10 12:36:52.905271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.905304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.905414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.905445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.905667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.905698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.905906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.905938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.906150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.906193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 
00:28:30.967 [2024-12-10 12:36:52.906372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.906403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.906676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.906707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.906915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.906952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.907137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.907176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.907304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.907336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 
00:28:30.967 [2024-12-10 12:36:52.907611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.907642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.907938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.907969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.908247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.908280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.908392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.908423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.908613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.908643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 
00:28:30.967 [2024-12-10 12:36:52.908864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.908895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.909198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.909231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.909415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.909448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.909676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.909707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.909959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.909990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 
00:28:30.967 [2024-12-10 12:36:52.910183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.910215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.910422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.910454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.910717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.910748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.910871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.910903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 00:28:30.967 [2024-12-10 12:36:52.911079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.911111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 
00:28:30.967 [2024-12-10 12:36:52.911405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.967 [2024-12-10 12:36:52.911437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.967 qpair failed and we were unable to recover it. 
[log truncated: the three-line error sequence above repeats continuously with advancing timestamps from 12:36:52.911405 through 12:36:52.938214; every connect() retry to tqpair=0x7f88cc000b90 at addr=10.0.0.2, port=4420 fails with errno = 111 (ECONNREFUSED) and the qpair is not recovered]
00:28:30.971 [2024-12-10 12:36:52.938493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.938524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.938662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.938693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.938890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.938922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.939189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.939222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.939496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.939528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 
00:28:30.971 [2024-12-10 12:36:52.939649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.939679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.939965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.939996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.940252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.940284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.940409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.940441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.940638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.940668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 
00:28:30.971 [2024-12-10 12:36:52.940783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.940815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.940948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.940979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.941234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.941266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.941543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.941575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.941906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.941982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 
00:28:30.971 [2024-12-10 12:36:52.942269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.942307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.942592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.942626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.942772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.942804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.943031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.943064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.943317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.943350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 
00:28:30.971 [2024-12-10 12:36:52.943546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.943578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.943755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.943787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.944038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.944071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.944250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.944283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.944466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.944497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 
00:28:30.971 [2024-12-10 12:36:52.944802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.944834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.945015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.945048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.945332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.945375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.945570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.945602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.945778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.945810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 
00:28:30.971 [2024-12-10 12:36:52.946007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.946040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.946150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.946195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.946468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.946500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.946683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.946715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.946994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.947027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 
00:28:30.971 [2024-12-10 12:36:52.947332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.947365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.947643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.947675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.947877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.971 [2024-12-10 12:36:52.947910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.971 qpair failed and we were unable to recover it. 00:28:30.971 [2024-12-10 12:36:52.948102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.948133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.948268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.948300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 
00:28:30.972 [2024-12-10 12:36:52.948513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.948544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.948879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.948911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.949139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.949181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.949492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.949524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.949825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.949856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 
00:28:30.972 [2024-12-10 12:36:52.950073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.950106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.950329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.950362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.950542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.950574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.950819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.950851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.951078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.951110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 
00:28:30.972 [2024-12-10 12:36:52.951322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.951358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.951635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.951667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.951849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.951884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.952139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.952185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.952383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.952415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 
00:28:30.972 [2024-12-10 12:36:52.952595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.952626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.952801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.952833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.953009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.953040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.953224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.953257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.953518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.953549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 
00:28:30.972 [2024-12-10 12:36:52.953732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.953763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.953888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.953920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.954043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.954073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.954256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.954288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.954488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.954520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 
00:28:30.972 [2024-12-10 12:36:52.954700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.954730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.954927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.954959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.955177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.955217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.955350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.955381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.955561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.955592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 
00:28:30.972 [2024-12-10 12:36:52.955773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.955805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.956068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.956099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.972 [2024-12-10 12:36:52.956233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.972 [2024-12-10 12:36:52.956266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.972 qpair failed and we were unable to recover it. 00:28:30.973 [2024-12-10 12:36:52.956471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.973 [2024-12-10 12:36:52.956503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.973 qpair failed and we were unable to recover it. 00:28:30.973 [2024-12-10 12:36:52.956648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.973 [2024-12-10 12:36:52.956680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.973 qpair failed and we were unable to recover it. 
00:28:30.973 [2024-12-10 12:36:52.956816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.973 [2024-12-10 12:36:52.956846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.973 qpair failed and we were unable to recover it.
[... identical "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." entries for tqpair=0x7f88cc000b90 repeated; only timestamps differ ...]
00:28:30.973 [2024-12-10 12:36:52.961735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.973 [2024-12-10 12:36:52.961817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.973 qpair failed and we were unable to recover it.
[... identical entries for tqpair=0x1574be0 repeated; only timestamps differ ...]
00:28:30.975 [2024-12-10 12:36:52.984831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.975 [2024-12-10 12:36:52.984908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:30.975 qpair failed and we were unable to recover it.
00:28:30.975 [2024-12-10 12:36:52.985127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.975 [2024-12-10 12:36:52.985183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.975 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.985464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.985503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.985786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.985818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.986007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.986039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.986244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.986276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 
00:28:30.976 [2024-12-10 12:36:52.986399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.986429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.986635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.986665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.986845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.986874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.987010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.987040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.987236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.987269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 
00:28:30.976 [2024-12-10 12:36:52.987546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.987577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.987706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.987737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.988024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.988064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.988258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.988290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.988422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.988451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 
00:28:30.976 [2024-12-10 12:36:52.988702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.988733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.988932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.988963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.989177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.989209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.989413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.989443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.989643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.989673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 
00:28:30.976 [2024-12-10 12:36:52.989789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.989820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.989941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.989971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.990178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.990209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.990402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.990435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.990614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.990645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 
00:28:30.976 [2024-12-10 12:36:52.990785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.990817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.991097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.991129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.991442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.991475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.991688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.991721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.992002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.992033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 
00:28:30.976 [2024-12-10 12:36:52.992223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.992256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.992378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.992410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.992613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.992645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.992786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.992816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.976 [2024-12-10 12:36:52.992945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.992976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 
00:28:30.976 [2024-12-10 12:36:52.993167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.976 [2024-12-10 12:36:52.993200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.976 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:52.993323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.993354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:52.993541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.993573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:52.993754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.993786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:52.994118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.994210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 
00:28:30.977 [2024-12-10 12:36:52.994467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.994505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:52.994636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.994668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:52.994947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.994979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:52.995188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.995222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:52.995428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.995460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 
00:28:30.977 [2024-12-10 12:36:52.995638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.995668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:52.995947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.995978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:52.996202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.996236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:52.996422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.996453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:52.996672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.996702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 
00:28:30.977 [2024-12-10 12:36:52.996884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.996915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:52.997112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.997142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:52.997270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.997312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:52.997535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.997566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:52.997763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.997795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 
00:28:30.977 [2024-12-10 12:36:52.997976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.998007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:52.998184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.998217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:52.998486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.998517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:52.998701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.998732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:52.998933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.998963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 
00:28:30.977 [2024-12-10 12:36:52.999176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.999209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:52.999336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.999367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:52.999560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.999591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:52.999798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:52.999828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:53.000012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:53.000044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 
00:28:30.977 [2024-12-10 12:36:53.000223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:53.000256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:53.000541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:53.000572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:53.000900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:53.000932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:53.001136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:53.001178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:53.001458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:53.001489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 
00:28:30.977 [2024-12-10 12:36:53.001688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:53.001719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:53.001922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:53.001953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:53.002249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:53.002282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:53.002544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.977 [2024-12-10 12:36:53.002575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.977 qpair failed and we were unable to recover it. 00:28:30.977 [2024-12-10 12:36:53.002926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.978 [2024-12-10 12:36:53.002957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:30.978 qpair failed and we were unable to recover it. 
00:28:30.978 [2024-12-10 12:36:53.003135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.978 [2024-12-10 12:36:53.003177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:30.978 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it.") repeats continuously from 12:36:53.003 through 12:36:53.013 for tqpair=0x7f88cc000b90, and then from 12:36:53.013 through 12:36:53.028 for tqpair=0x1574be0, all targeting addr=10.0.0.2, port=4420 ...]
00:28:30.980 [2024-12-10 12:36:53.028394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.980 [2024-12-10 12:36:53.028425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.980 qpair failed and we were unable to recover it. 00:28:30.980 [2024-12-10 12:36:53.028635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.980 [2024-12-10 12:36:53.028669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.980 qpair failed and we were unable to recover it. 00:28:30.980 [2024-12-10 12:36:53.028863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.028894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.029016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.029048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.029239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.029271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 
00:28:30.981 [2024-12-10 12:36:53.029450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.029481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.029665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.029699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.029897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.029929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.030133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.030185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.030303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.030334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 
00:28:30.981 [2024-12-10 12:36:53.030537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.030580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.030902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.030934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.031111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.031144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.031444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.031477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.031602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.031634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 
00:28:30.981 [2024-12-10 12:36:53.031814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.031846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.032121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.032153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.032357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.032389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.032569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.032601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.032784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.032826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 
00:28:30.981 [2024-12-10 12:36:53.033010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.033042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.033320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.033353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.033643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.033675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.033851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.033881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.033993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.034024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 
00:28:30.981 [2024-12-10 12:36:53.034303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.034337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.034535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.034568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.034690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.034722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.034838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.034870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.035045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.035076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 
00:28:30.981 [2024-12-10 12:36:53.035263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.035296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.035490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.035522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.035639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.035670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.035868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.035900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.036033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.036069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 
00:28:30.981 [2024-12-10 12:36:53.036261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.036295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.036494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.981 [2024-12-10 12:36:53.036526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.981 qpair failed and we were unable to recover it. 00:28:30.981 [2024-12-10 12:36:53.036702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.036734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.036942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.036980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.037181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.037214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 
00:28:30.982 [2024-12-10 12:36:53.037393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.037424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.037713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.037746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.037876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.037908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.038083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.038114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.038391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.038425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 
00:28:30.982 [2024-12-10 12:36:53.038555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.038587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.038711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.038742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.038860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.038891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.039014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.039046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.039182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.039222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 
00:28:30.982 [2024-12-10 12:36:53.039361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.039391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.039511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.039549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.039726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.039757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.039937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.039969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.040143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.040186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 
00:28:30.982 [2024-12-10 12:36:53.040392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.040424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.040627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.040660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.040838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.040869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.040985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.041016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.041217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.041251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 
00:28:30.982 [2024-12-10 12:36:53.041436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.041469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.041592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.041624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.041827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.041866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.042073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.042107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.042307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.042339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 
00:28:30.982 [2024-12-10 12:36:53.042461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.042492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.042619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.042650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.042759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.042791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.042912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.042945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.043156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.043204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 
00:28:30.982 [2024-12-10 12:36:53.043326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.043360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.043580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.043621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.043806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.043839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.044012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.044043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 00:28:30.982 [2024-12-10 12:36:53.044336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.982 [2024-12-10 12:36:53.044369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.982 qpair failed and we were unable to recover it. 
00:28:30.982 [2024-12-10 12:36:53.044545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:30.982 [2024-12-10 12:36:53.044575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:30.982 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats continuously from 12:36:53.044545 through 12:36:53.068210, first for tqpair=0x1574be0, then tqpair=0x7f88d8000b90, then tqpair=0x7f88d0000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:28:30.985 [2024-12-10 12:36:53.068390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.068421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.068610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.068642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.068830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.068861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.069036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.069067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.069281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.069314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 
00:28:30.986 [2024-12-10 12:36:53.069503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.069535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.069706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.069737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.069859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.069889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.070062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.070094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.070285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.070318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 
00:28:30.986 [2024-12-10 12:36:53.070522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.070553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.070742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.070774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.071033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.071065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.071193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.071225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.071439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.071471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 
00:28:30.986 [2024-12-10 12:36:53.071649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.071680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.071806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.071837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.071941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.071971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.072085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.072117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.072245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.072277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 
00:28:30.986 [2024-12-10 12:36:53.072478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.072509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.072632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.072663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.072836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.072873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.072990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.073021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.073194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.073227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 
00:28:30.986 [2024-12-10 12:36:53.073334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.073366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.073553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.073584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.073851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.073882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.074175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.074208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.074398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.074430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 
00:28:30.986 [2024-12-10 12:36:53.074602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.074634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.074814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.074847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.075088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.075120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.075338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.075370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.075491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.075522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 
00:28:30.986 [2024-12-10 12:36:53.075628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.075659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.075842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.075874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.076058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.076090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.076209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.986 [2024-12-10 12:36:53.076242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.986 qpair failed and we were unable to recover it. 00:28:30.986 [2024-12-10 12:36:53.076418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.076450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 
00:28:30.987 [2024-12-10 12:36:53.076627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.076658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.076771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.076802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.076919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.076950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.077070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.077103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.077297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.077329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 
00:28:30.987 [2024-12-10 12:36:53.077599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.077631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.077755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.077787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.077894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.077926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.078095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.078126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.078303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.078376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 
00:28:30.987 [2024-12-10 12:36:53.078525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.078560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.078743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.078776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.078905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.078937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.079133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.079180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.079293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.079325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 
00:28:30.987 [2024-12-10 12:36:53.079522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.079552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.079719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.079751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.079872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.079905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.080094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.080125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.080327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.080359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 
00:28:30.987 [2024-12-10 12:36:53.080536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.080567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.080761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.080792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.080909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.080940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.081055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.081086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.081205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.081238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 
00:28:30.987 [2024-12-10 12:36:53.081440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.081470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.081657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.081688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.081879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.081911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.082178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.082211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.082320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.082350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 
00:28:30.987 [2024-12-10 12:36:53.082541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.082572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.082743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.082774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.082944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.082975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.083149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.083192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.083432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.083464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 
00:28:30.987 [2024-12-10 12:36:53.083672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.083704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.083809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.083847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.084020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.987 [2024-12-10 12:36:53.084052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.987 qpair failed and we were unable to recover it. 00:28:30.987 [2024-12-10 12:36:53.084239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.084271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.084443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.084474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 
00:28:30.988 [2024-12-10 12:36:53.084587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.084618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.084828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.084860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.084982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.085013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.085207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.085262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.085379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.085410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 
00:28:30.988 [2024-12-10 12:36:53.085526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.085557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.085725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.085756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.085936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.085968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.086088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.086119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.086230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.086263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 
00:28:30.988 [2024-12-10 12:36:53.086380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.086411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.086673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.086704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.086815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.086846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.086946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.086977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.087147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.087190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 
00:28:30.988 [2024-12-10 12:36:53.087306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.087336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.087508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.087539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.087656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.087686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.087852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.087882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.087989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.088020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 
00:28:30.988 [2024-12-10 12:36:53.088228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.088260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.088382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.088413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.088600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.088631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.088749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.088786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.088952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.088983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 
00:28:30.988 [2024-12-10 12:36:53.089262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.089296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.089420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.089451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.089566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.089597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.089857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.089888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.090057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.090089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 
00:28:30.988 [2024-12-10 12:36:53.090232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.090264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.090380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.090411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.090582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.090614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.090725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.090756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.090880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.090910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 
00:28:30.988 [2024-12-10 12:36:53.091023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.091054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.091225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.091258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.988 [2024-12-10 12:36:53.091445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.988 [2024-12-10 12:36:53.091478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.988 qpair failed and we were unable to recover it. 00:28:30.989 [2024-12-10 12:36:53.091604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.989 [2024-12-10 12:36:53.091635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.989 qpair failed and we were unable to recover it. 00:28:30.989 [2024-12-10 12:36:53.091756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.989 [2024-12-10 12:36:53.091787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.989 qpair failed and we were unable to recover it. 
00:28:30.989 [2024-12-10 12:36:53.091978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.989 [2024-12-10 12:36:53.092009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.989 qpair failed and we were unable to recover it. 00:28:30.989 [2024-12-10 12:36:53.092205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.989 [2024-12-10 12:36:53.092239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.989 qpair failed and we were unable to recover it. 00:28:30.989 [2024-12-10 12:36:53.092461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.989 [2024-12-10 12:36:53.092492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.989 qpair failed and we were unable to recover it. 00:28:30.989 [2024-12-10 12:36:53.092734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.989 [2024-12-10 12:36:53.092764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.989 qpair failed and we were unable to recover it. 00:28:30.989 [2024-12-10 12:36:53.092983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.989 [2024-12-10 12:36:53.093014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.989 qpair failed and we were unable to recover it. 
00:28:30.989 [2024-12-10 12:36:53.093213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.989 [2024-12-10 12:36:53.093245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.989 qpair failed and we were unable to recover it. 00:28:30.989 [2024-12-10 12:36:53.093440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.989 [2024-12-10 12:36:53.093471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.989 qpair failed and we were unable to recover it. 00:28:30.989 [2024-12-10 12:36:53.093664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.989 [2024-12-10 12:36:53.093697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.989 qpair failed and we were unable to recover it. 00:28:30.989 [2024-12-10 12:36:53.093872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.989 [2024-12-10 12:36:53.093903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.989 qpair failed and we were unable to recover it. 00:28:30.989 [2024-12-10 12:36:53.094091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:30.989 [2024-12-10 12:36:53.094121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:30.989 qpair failed and we were unable to recover it. 
00:28:31.272 [2024-12-10 12:36:53.094347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.272 [2024-12-10 12:36:53.094386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.272 qpair failed and we were unable to recover it. 00:28:31.272 [2024-12-10 12:36:53.094556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.272 [2024-12-10 12:36:53.094586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.272 qpair failed and we were unable to recover it. 00:28:31.272 [2024-12-10 12:36:53.094756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.272 [2024-12-10 12:36:53.094786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.272 qpair failed and we were unable to recover it. 00:28:31.272 [2024-12-10 12:36:53.094895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.272 [2024-12-10 12:36:53.094926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.272 qpair failed and we were unable to recover it. 00:28:31.272 [2024-12-10 12:36:53.095218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.272 [2024-12-10 12:36:53.095249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.272 qpair failed and we were unable to recover it. 
00:28:31.272 [2024-12-10 12:36:53.095440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.272 [2024-12-10 12:36:53.095471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.272 qpair failed and we were unable to recover it. 00:28:31.272 [2024-12-10 12:36:53.095658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.272 [2024-12-10 12:36:53.095690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.272 qpair failed and we were unable to recover it. 00:28:31.272 [2024-12-10 12:36:53.095871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.272 [2024-12-10 12:36:53.095901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.272 qpair failed and we were unable to recover it. 00:28:31.272 [2024-12-10 12:36:53.096074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.272 [2024-12-10 12:36:53.096106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.272 qpair failed and we were unable to recover it. 00:28:31.272 [2024-12-10 12:36:53.096315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.272 [2024-12-10 12:36:53.096348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.272 qpair failed and we were unable to recover it. 
00:28:31.272 [2024-12-10 12:36:53.096543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.272 [2024-12-10 12:36:53.096575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.272 qpair failed and we were unable to recover it. 00:28:31.272 [2024-12-10 12:36:53.096786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.272 [2024-12-10 12:36:53.096818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.272 qpair failed and we were unable to recover it. 00:28:31.272 [2024-12-10 12:36:53.097009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.272 [2024-12-10 12:36:53.097043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.272 qpair failed and we were unable to recover it. 00:28:31.272 [2024-12-10 12:36:53.097260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.272 [2024-12-10 12:36:53.097293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.272 qpair failed and we were unable to recover it. 00:28:31.272 [2024-12-10 12:36:53.097469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.272 [2024-12-10 12:36:53.097540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.272 qpair failed and we were unable to recover it. 
00:28:31.272 [2024-12-10 12:36:53.097764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.272 [2024-12-10 12:36:53.097800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.272 qpair failed and we were unable to recover it. 00:28:31.273 [2024-12-10 12:36:53.098058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.098090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 00:28:31.273 [2024-12-10 12:36:53.098231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.098263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 00:28:31.273 [2024-12-10 12:36:53.098467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.098499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 00:28:31.273 [2024-12-10 12:36:53.098601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.098632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 
00:28:31.273 [2024-12-10 12:36:53.098821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.098851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 00:28:31.273 [2024-12-10 12:36:53.098963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.098994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 00:28:31.273 [2024-12-10 12:36:53.099106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.099138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 00:28:31.273 [2024-12-10 12:36:53.099325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.099357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 00:28:31.273 [2024-12-10 12:36:53.099473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.099504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 
00:28:31.273 [2024-12-10 12:36:53.099682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.099714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 00:28:31.273 [2024-12-10 12:36:53.099837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.099867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 00:28:31.273 [2024-12-10 12:36:53.099972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.100019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 00:28:31.273 [2024-12-10 12:36:53.100138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.100180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 00:28:31.273 [2024-12-10 12:36:53.100285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.100316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 
00:28:31.273 [2024-12-10 12:36:53.100488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.100518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 00:28:31.273 [2024-12-10 12:36:53.100697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.100728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 00:28:31.273 [2024-12-10 12:36:53.100849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.100879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 00:28:31.273 [2024-12-10 12:36:53.101067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.101098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 00:28:31.273 [2024-12-10 12:36:53.101283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.101314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 
00:28:31.273 [2024-12-10 12:36:53.101506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.101536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 00:28:31.273 [2024-12-10 12:36:53.101654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.101684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 00:28:31.273 [2024-12-10 12:36:53.101816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.101847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 00:28:31.273 [2024-12-10 12:36:53.102081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.102111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 00:28:31.273 [2024-12-10 12:36:53.102309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.102341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it. 
00:28:31.273 [2024-12-10 12:36:53.102461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.273 [2024-12-10 12:36:53.102492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.273 qpair failed and we were unable to recover it.
[... the identical error triplet — posix.c:1054:posix_sock_create "connect() failed, errno = 111", nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error", "qpair failed and we were unable to recover it." — repeats continuously from 12:36:53.102 through 12:36:53.124, always targeting addr=10.0.0.2, port=4420, cycling through tqpair handles 0x7f88cc000b90, 0x7f88d0000b90, 0x7f88d8000b90, and 0x1574be0 ...]
00:28:31.277 [2024-12-10 12:36:53.124564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.277 [2024-12-10 12:36:53.124594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.277 qpair failed and we were unable to recover it. 00:28:31.277 [2024-12-10 12:36:53.124698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.277 [2024-12-10 12:36:53.124729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.277 qpair failed and we were unable to recover it. 00:28:31.277 [2024-12-10 12:36:53.124853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.277 [2024-12-10 12:36:53.124884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.277 qpair failed and we were unable to recover it. 00:28:31.277 [2024-12-10 12:36:53.125003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.277 [2024-12-10 12:36:53.125034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.277 qpair failed and we were unable to recover it. 00:28:31.277 [2024-12-10 12:36:53.125154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.277 [2024-12-10 12:36:53.125193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.277 qpair failed and we were unable to recover it. 
00:28:31.277 [2024-12-10 12:36:53.125307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.277 [2024-12-10 12:36:53.125338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.277 qpair failed and we were unable to recover it. 00:28:31.277 [2024-12-10 12:36:53.125453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.277 [2024-12-10 12:36:53.125483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.277 qpair failed and we were unable to recover it. 00:28:31.277 [2024-12-10 12:36:53.125687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.277 [2024-12-10 12:36:53.125719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.277 qpair failed and we were unable to recover it. 00:28:31.277 [2024-12-10 12:36:53.125896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.277 [2024-12-10 12:36:53.125927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.277 qpair failed and we were unable to recover it. 00:28:31.277 [2024-12-10 12:36:53.126046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.277 [2024-12-10 12:36:53.126076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.277 qpair failed and we were unable to recover it. 
00:28:31.277 [2024-12-10 12:36:53.126259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.277 [2024-12-10 12:36:53.126291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.277 qpair failed and we were unable to recover it. 00:28:31.277 [2024-12-10 12:36:53.126472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.277 [2024-12-10 12:36:53.126503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.277 qpair failed and we were unable to recover it. 00:28:31.277 [2024-12-10 12:36:53.126688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.277 [2024-12-10 12:36:53.126718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.277 qpair failed and we were unable to recover it. 00:28:31.277 [2024-12-10 12:36:53.126824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.277 [2024-12-10 12:36:53.126855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.277 qpair failed and we were unable to recover it. 00:28:31.277 [2024-12-10 12:36:53.127048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.277 [2024-12-10 12:36:53.127080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.277 qpair failed and we were unable to recover it. 
00:28:31.277 [2024-12-10 12:36:53.127275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.277 [2024-12-10 12:36:53.127306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.277 qpair failed and we were unable to recover it. 00:28:31.277 [2024-12-10 12:36:53.127418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.277 [2024-12-10 12:36:53.127449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.277 qpair failed and we were unable to recover it. 00:28:31.277 [2024-12-10 12:36:53.127655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.277 [2024-12-10 12:36:53.127685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.277 qpair failed and we were unable to recover it. 00:28:31.277 [2024-12-10 12:36:53.127880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.277 [2024-12-10 12:36:53.127911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.277 qpair failed and we were unable to recover it. 00:28:31.277 [2024-12-10 12:36:53.128025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.277 [2024-12-10 12:36:53.128056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.277 qpair failed and we were unable to recover it. 
00:28:31.277 [2024-12-10 12:36:53.128222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.277 [2024-12-10 12:36:53.128259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.277 qpair failed and we were unable to recover it. 00:28:31.277 [2024-12-10 12:36:53.128396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.277 [2024-12-10 12:36:53.128427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.277 qpair failed and we were unable to recover it. 00:28:31.277 [2024-12-10 12:36:53.128539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.277 [2024-12-10 12:36:53.128570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.277 qpair failed and we were unable to recover it. 00:28:31.277 [2024-12-10 12:36:53.128675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.128705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.128872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.128903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 
00:28:31.278 [2024-12-10 12:36:53.129194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.129226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.129466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.129498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.129609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.129639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.129835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.129866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.130051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.130082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 
00:28:31.278 [2024-12-10 12:36:53.130288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.130320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.130488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.130518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.130686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.130717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.130827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.130857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.131034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.131065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 
00:28:31.278 [2024-12-10 12:36:53.131232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.131263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.131450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.131480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.131586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.131616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.131819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.131851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.132092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.132122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 
00:28:31.278 [2024-12-10 12:36:53.132249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.132281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.132451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.132482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.132604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.132634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.132826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.132857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.133042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.133074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 
00:28:31.278 [2024-12-10 12:36:53.133195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.133227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.133398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.133430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.133532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.133562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.133736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.133768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.133945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.133975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 
00:28:31.278 [2024-12-10 12:36:53.134167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.134199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.134370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.134401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.134515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.134546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.134748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.134779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.134947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.134978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 
00:28:31.278 [2024-12-10 12:36:53.135240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.135271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.135374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.278 [2024-12-10 12:36:53.135405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.278 qpair failed and we were unable to recover it. 00:28:31.278 [2024-12-10 12:36:53.135607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.279 [2024-12-10 12:36:53.135637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.279 qpair failed and we were unable to recover it. 00:28:31.279 [2024-12-10 12:36:53.135818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.279 [2024-12-10 12:36:53.135849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.279 qpair failed and we were unable to recover it. 00:28:31.279 [2024-12-10 12:36:53.135952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.279 [2024-12-10 12:36:53.135983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.279 qpair failed and we were unable to recover it. 
00:28:31.279 [2024-12-10 12:36:53.136109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.279 [2024-12-10 12:36:53.136140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.279 qpair failed and we were unable to recover it. 00:28:31.279 [2024-12-10 12:36:53.136262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.279 [2024-12-10 12:36:53.136294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.279 qpair failed and we were unable to recover it. 00:28:31.279 [2024-12-10 12:36:53.136464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.279 [2024-12-10 12:36:53.136493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.279 qpair failed and we were unable to recover it. 00:28:31.279 [2024-12-10 12:36:53.136669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.279 [2024-12-10 12:36:53.136700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.279 qpair failed and we were unable to recover it. 00:28:31.279 [2024-12-10 12:36:53.136869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.279 [2024-12-10 12:36:53.136900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.279 qpair failed and we were unable to recover it. 
00:28:31.279 [2024-12-10 12:36:53.137142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.279 [2024-12-10 12:36:53.137184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.279 qpair failed and we were unable to recover it. 00:28:31.279 [2024-12-10 12:36:53.137305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.279 [2024-12-10 12:36:53.137335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.279 qpair failed and we were unable to recover it. 00:28:31.279 [2024-12-10 12:36:53.137504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.279 [2024-12-10 12:36:53.137536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.279 qpair failed and we were unable to recover it. 00:28:31.279 [2024-12-10 12:36:53.137702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.279 [2024-12-10 12:36:53.137733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.279 qpair failed and we were unable to recover it. 00:28:31.279 [2024-12-10 12:36:53.137940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.279 [2024-12-10 12:36:53.137970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.279 qpair failed and we were unable to recover it. 
00:28:31.279 [2024-12-10 12:36:53.138168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.279 [2024-12-10 12:36:53.138201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.279 qpair failed and we were unable to recover it. 00:28:31.279 [2024-12-10 12:36:53.138379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.279 [2024-12-10 12:36:53.138409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.279 qpair failed and we were unable to recover it. 00:28:31.279 [2024-12-10 12:36:53.138615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.279 [2024-12-10 12:36:53.138646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.279 qpair failed and we were unable to recover it. 00:28:31.279 [2024-12-10 12:36:53.138815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.279 [2024-12-10 12:36:53.138845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.279 qpair failed and we were unable to recover it. 00:28:31.279 [2024-12-10 12:36:53.139082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.279 [2024-12-10 12:36:53.139112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.279 qpair failed and we were unable to recover it. 
00:28:31.279 [2024-12-10 12:36:53.139254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.279 [2024-12-10 12:36:53.139287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.279 qpair failed and we were unable to recover it. 
00:28:31.279 [2024-12-10 12:36:53.142373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.279 [2024-12-10 12:36:53.142443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.279 qpair failed and we were unable to recover it. 
00:28:31.282 [2024-12-10 12:36:53.155383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.282 [2024-12-10 12:36:53.155453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.282 qpair failed and we were unable to recover it. 
00:28:31.283 [2024-12-10 12:36:53.161171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.283 [2024-12-10 12:36:53.161203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.283 qpair failed and we were unable to recover it. 00:28:31.283 [2024-12-10 12:36:53.161319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.283 [2024-12-10 12:36:53.161350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.283 qpair failed and we were unable to recover it. 00:28:31.283 [2024-12-10 12:36:53.161464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.283 [2024-12-10 12:36:53.161494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.283 qpair failed and we were unable to recover it. 00:28:31.283 [2024-12-10 12:36:53.161599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.283 [2024-12-10 12:36:53.161630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.283 qpair failed and we were unable to recover it. 00:28:31.283 [2024-12-10 12:36:53.161797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.283 [2024-12-10 12:36:53.161828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.283 qpair failed and we were unable to recover it. 
00:28:31.283 [2024-12-10 12:36:53.162073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.283 [2024-12-10 12:36:53.162144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.283 qpair failed and we were unable to recover it.
00:28:31.283 [2024-12-10 12:36:53.162365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.283 [2024-12-10 12:36:53.162410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.283 qpair failed and we were unable to recover it.
00:28:31.283 [2024-12-10 12:36:53.162540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.283 [2024-12-10 12:36:53.162572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.283 qpair failed and we were unable to recover it.
00:28:31.283 [2024-12-10 12:36:53.162680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.283 [2024-12-10 12:36:53.162711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.283 qpair failed and we were unable to recover it.
00:28:31.283 [2024-12-10 12:36:53.162944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.283 [2024-12-10 12:36:53.162976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.283 qpair failed and we were unable to recover it.
00:28:31.283 [2024-12-10 12:36:53.163144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.283 [2024-12-10 12:36:53.163185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.283 qpair failed and we were unable to recover it.
00:28:31.283 [2024-12-10 12:36:53.163354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.283 [2024-12-10 12:36:53.163385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.283 qpair failed and we were unable to recover it.
00:28:31.283 [2024-12-10 12:36:53.163569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.283 [2024-12-10 12:36:53.163601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.283 qpair failed and we were unable to recover it.
00:28:31.283 [2024-12-10 12:36:53.163769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.283 [2024-12-10 12:36:53.163799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.283 qpair failed and we were unable to recover it.
00:28:31.283 [2024-12-10 12:36:53.163918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.283 [2024-12-10 12:36:53.163950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.283 qpair failed and we were unable to recover it.
00:28:31.283 [2024-12-10 12:36:53.164123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.283 [2024-12-10 12:36:53.164155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.283 qpair failed and we were unable to recover it.
00:28:31.283 [2024-12-10 12:36:53.164333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.283 [2024-12-10 12:36:53.164364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.283 qpair failed and we were unable to recover it.
00:28:31.283 [2024-12-10 12:36:53.164530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.283 [2024-12-10 12:36:53.164562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.283 qpair failed and we were unable to recover it.
00:28:31.283 [2024-12-10 12:36:53.164731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.283 [2024-12-10 12:36:53.164770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.283 qpair failed and we were unable to recover it.
00:28:31.283 [2024-12-10 12:36:53.164884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.283 [2024-12-10 12:36:53.164915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.283 qpair failed and we were unable to recover it.
00:28:31.283 [2024-12-10 12:36:53.165020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.283 [2024-12-10 12:36:53.165052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.283 qpair failed and we were unable to recover it.
00:28:31.283 [2024-12-10 12:36:53.165220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.283 [2024-12-10 12:36:53.165252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.283 qpair failed and we were unable to recover it.
00:28:31.283 [2024-12-10 12:36:53.165513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.283 [2024-12-10 12:36:53.165545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.283 qpair failed and we were unable to recover it.
00:28:31.283 [2024-12-10 12:36:53.165663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.283 [2024-12-10 12:36:53.165694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.283 qpair failed and we were unable to recover it.
00:28:31.283 [2024-12-10 12:36:53.165891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.283 [2024-12-10 12:36:53.165922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.283 qpair failed and we were unable to recover it.
00:28:31.283 [2024-12-10 12:36:53.166089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.283 [2024-12-10 12:36:53.166121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.283 qpair failed and we were unable to recover it.
00:28:31.283 [2024-12-10 12:36:53.166248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.283 [2024-12-10 12:36:53.166281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.283 qpair failed and we were unable to recover it.
00:28:31.283 [2024-12-10 12:36:53.166450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.283 [2024-12-10 12:36:53.166482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.283 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.166582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.166613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.166812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.166844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.167013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.167045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.167174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.167206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.167314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.167345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.167510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.167541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.167652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.167683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.167874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.167907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.168068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.168098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.168294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.168349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.168545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.168577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.168749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.168780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.168954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.168989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.169186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.169219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.169393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.169425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.169533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.169564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.169692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.169723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.169872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.169943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.170086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.170121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.170314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.170349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.170527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.170560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.170748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.170779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.170960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.170992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.171186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.171219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.171391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.171422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.171525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.171556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.171660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.171691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.171795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.171825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.172086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.172118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.172299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.172331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.172517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.172549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.172729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.284 [2024-12-10 12:36:53.172760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.284 qpair failed and we were unable to recover it.
00:28:31.284 [2024-12-10 12:36:53.172874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.284 [2024-12-10 12:36:53.172906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.284 qpair failed and we were unable to recover it. 00:28:31.284 [2024-12-10 12:36:53.173069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.284 [2024-12-10 12:36:53.173099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.284 qpair failed and we were unable to recover it. 00:28:31.285 [2024-12-10 12:36:53.173301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.285 [2024-12-10 12:36:53.173334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.285 qpair failed and we were unable to recover it. 00:28:31.285 [2024-12-10 12:36:53.173517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.285 [2024-12-10 12:36:53.173548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.285 qpair failed and we were unable to recover it. 00:28:31.285 [2024-12-10 12:36:53.173720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.285 [2024-12-10 12:36:53.173751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.285 qpair failed and we were unable to recover it. 
00:28:31.285 [2024-12-10 12:36:53.173858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.285 [2024-12-10 12:36:53.173889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.285 qpair failed and we were unable to recover it. 00:28:31.285 [2024-12-10 12:36:53.174009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.285 [2024-12-10 12:36:53.174040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.285 qpair failed and we were unable to recover it. 00:28:31.285 [2024-12-10 12:36:53.174226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.285 [2024-12-10 12:36:53.174259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.285 qpair failed and we were unable to recover it. 00:28:31.285 [2024-12-10 12:36:53.174427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.285 [2024-12-10 12:36:53.174458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.285 qpair failed and we were unable to recover it. 00:28:31.285 [2024-12-10 12:36:53.174614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.285 [2024-12-10 12:36:53.174645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.285 qpair failed and we were unable to recover it. 
00:28:31.285 [2024-12-10 12:36:53.174767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.285 [2024-12-10 12:36:53.174798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.285 qpair failed and we were unable to recover it. 00:28:31.285 [2024-12-10 12:36:53.174916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.285 [2024-12-10 12:36:53.174946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.285 qpair failed and we were unable to recover it. 00:28:31.285 [2024-12-10 12:36:53.175111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.285 [2024-12-10 12:36:53.175149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.285 qpair failed and we were unable to recover it. 00:28:31.285 [2024-12-10 12:36:53.175269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.285 [2024-12-10 12:36:53.175301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.285 qpair failed and we were unable to recover it. 00:28:31.285 [2024-12-10 12:36:53.175469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.285 [2024-12-10 12:36:53.175500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.285 qpair failed and we were unable to recover it. 
00:28:31.285 [2024-12-10 12:36:53.175601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.285 [2024-12-10 12:36:53.175632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.285 qpair failed and we were unable to recover it. 00:28:31.285 [2024-12-10 12:36:53.175805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.285 [2024-12-10 12:36:53.175836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.285 qpair failed and we were unable to recover it. 00:28:31.285 [2024-12-10 12:36:53.176081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.285 [2024-12-10 12:36:53.176111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.285 qpair failed and we were unable to recover it. 00:28:31.285 [2024-12-10 12:36:53.176293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.285 [2024-12-10 12:36:53.176326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.285 qpair failed and we were unable to recover it. 00:28:31.285 [2024-12-10 12:36:53.176429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.285 [2024-12-10 12:36:53.176460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.285 qpair failed and we were unable to recover it. 
00:28:31.288 [2024-12-10 12:36:53.192981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.288 [2024-12-10 12:36:53.193012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.288 qpair failed and we were unable to recover it.
00:28:31.288 [2024-12-10 12:36:53.193153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.288 [2024-12-10 12:36:53.193222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.288 qpair failed and we were unable to recover it.
00:28:31.288 [2024-12-10 12:36:53.193407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.288 [2024-12-10 12:36:53.193477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.288 qpair failed and we were unable to recover it.
00:28:31.288 [2024-12-10 12:36:53.193614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.288 [2024-12-10 12:36:53.193650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.288 qpair failed and we were unable to recover it.
00:28:31.288 [2024-12-10 12:36:53.193756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.288 [2024-12-10 12:36:53.193787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.288 qpair failed and we were unable to recover it.
00:28:31.288 [2024-12-10 12:36:53.197101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.288 [2024-12-10 12:36:53.197131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.288 qpair failed and we were unable to recover it. 00:28:31.288 [2024-12-10 12:36:53.197291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.288 [2024-12-10 12:36:53.197368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.288 qpair failed and we were unable to recover it. 00:28:31.288 [2024-12-10 12:36:53.197647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.288 [2024-12-10 12:36:53.197717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.288 qpair failed and we were unable to recover it. 00:28:31.288 [2024-12-10 12:36:53.197842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.288 [2024-12-10 12:36:53.197879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.288 qpair failed and we were unable to recover it. 00:28:31.288 [2024-12-10 12:36:53.197995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.288 [2024-12-10 12:36:53.198026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.288 qpair failed and we were unable to recover it. 
00:28:31.288 [2024-12-10 12:36:53.198300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.288 [2024-12-10 12:36:53.198334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.288 qpair failed and we were unable to recover it. 00:28:31.288 [2024-12-10 12:36:53.198452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.288 [2024-12-10 12:36:53.198483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.198601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.198631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.198802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.198834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.199029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.199061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 
00:28:31.289 [2024-12-10 12:36:53.199266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.199299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.199421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.199451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.199567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.199598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.199706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.199738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.199960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.199990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 
00:28:31.289 [2024-12-10 12:36:53.200180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.200213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.200332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.200362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.200473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.200504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.200691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.200722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.200907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.200938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 
00:28:31.289 [2024-12-10 12:36:53.201039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.201069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.201309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.201342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.201457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.201494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.201678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.201708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.201929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.201960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 
00:28:31.289 [2024-12-10 12:36:53.202073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.202105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.202232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.202264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.202438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.202468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.202637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.202667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.202768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.202797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 
00:28:31.289 [2024-12-10 12:36:53.203030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.203060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.203231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.203263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.203456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.203488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.203674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.203705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.203878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.203909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 
00:28:31.289 [2024-12-10 12:36:53.204101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.204134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.204367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.204437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.204577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.204611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.204817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.204847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 00:28:31.289 [2024-12-10 12:36:53.205047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.289 [2024-12-10 12:36:53.205079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.289 qpair failed and we were unable to recover it. 
00:28:31.289 [2024-12-10 12:36:53.205181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.205213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.205310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.205341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.205461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.205492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.205606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.205636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.205801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.205833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 
00:28:31.290 [2024-12-10 12:36:53.206004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.206034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.206287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.206318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.206490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.206521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.206692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.206722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.206836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.206874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 
00:28:31.290 [2024-12-10 12:36:53.206976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.207006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.207268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.207300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.207465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.207496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.207663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.207694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.207877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.207908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 
00:28:31.290 [2024-12-10 12:36:53.208037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.208067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.208193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.208224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.208336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.208368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.208469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.208499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.208673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.208704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 
00:28:31.290 [2024-12-10 12:36:53.208871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.208902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.209084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.209114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.209304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.209336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.209511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.209543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.209781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.209812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 
00:28:31.290 [2024-12-10 12:36:53.209981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.210012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.210180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.210213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.210382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.210413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.210540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.210571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.210770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.210800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 
00:28:31.290 [2024-12-10 12:36:53.211004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.211035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.211274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.211306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.211478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.290 [2024-12-10 12:36:53.211508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.290 qpair failed and we were unable to recover it. 00:28:31.290 [2024-12-10 12:36:53.211618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.291 [2024-12-10 12:36:53.211649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.291 qpair failed and we were unable to recover it. 00:28:31.291 [2024-12-10 12:36:53.211755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.291 [2024-12-10 12:36:53.211786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.291 qpair failed and we were unable to recover it. 
00:28:31.291 [2024-12-10 12:36:53.211894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.291 [2024-12-10 12:36:53.211925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.291 qpair failed and we were unable to recover it. 00:28:31.291 [2024-12-10 12:36:53.212055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.291 [2024-12-10 12:36:53.212098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.291 qpair failed and we were unable to recover it. 00:28:31.291 [2024-12-10 12:36:53.212233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.291 [2024-12-10 12:36:53.212268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.291 qpair failed and we were unable to recover it. 00:28:31.291 [2024-12-10 12:36:53.212431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.291 [2024-12-10 12:36:53.212463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.291 qpair failed and we were unable to recover it. 00:28:31.291 [2024-12-10 12:36:53.212655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.291 [2024-12-10 12:36:53.212687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.291 qpair failed and we were unable to recover it. 
00:28:31.291 [2024-12-10 12:36:53.212792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.291 [2024-12-10 12:36:53.212824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.291 qpair failed and we were unable to recover it.
00:28:31.291 [... identical connect()/qpair-failed sequence for tqpair=0x7f88d0000b90 repeated from 12:36:53.213026 through 12:36:53.226747, errno = 111 on every attempt ...]
00:28:31.293 [2024-12-10 12:36:53.226938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.293 [2024-12-10 12:36:53.226986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.293 qpair failed and we were unable to recover it.
00:28:31.293 [2024-12-10 12:36:53.227179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.293 [2024-12-10 12:36:53.227213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.293 qpair failed and we were unable to recover it.
00:28:31.294 [... identical connect()/qpair-failed sequence for tqpair=0x7f88cc000b90 repeated from 12:36:53.227327 through 12:36:53.233652, errno = 111 on every attempt ...]
00:28:31.294 [2024-12-10 12:36:53.233836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.294 [2024-12-10 12:36:53.233866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.294 qpair failed and we were unable to recover it. 00:28:31.294 [2024-12-10 12:36:53.234068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.294 [2024-12-10 12:36:53.234121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.294 qpair failed and we were unable to recover it. 00:28:31.294 [2024-12-10 12:36:53.234329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.294 [2024-12-10 12:36:53.234379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.294 qpair failed and we were unable to recover it. 00:28:31.294 [2024-12-10 12:36:53.234560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.294 [2024-12-10 12:36:53.234601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.234753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.234783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 
00:28:31.295 [2024-12-10 12:36:53.234900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.234931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.235129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.235170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.235290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.235321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.235452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.235482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.235728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.235759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 
00:28:31.295 [2024-12-10 12:36:53.235931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.235960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.236073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.236104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.236322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.236354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.236527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.236556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.236832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.236862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 
00:28:31.295 [2024-12-10 12:36:53.236989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.237020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.237122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.237152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.237290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.237322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.237424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.237456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.237579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.237610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 
00:28:31.295 [2024-12-10 12:36:53.237713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.237744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.237910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.237940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.238132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.238174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.238347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.238377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.238541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.238571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 
00:28:31.295 [2024-12-10 12:36:53.238814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.238844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.239015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.239045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.239173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.239205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.239380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.239411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.239665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.239695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 
00:28:31.295 [2024-12-10 12:36:53.239887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.239917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.240018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.240048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.240222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.240254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.240372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.240402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.240570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.240601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 
00:28:31.295 [2024-12-10 12:36:53.240786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.240817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.295 [2024-12-10 12:36:53.240920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.295 [2024-12-10 12:36:53.240950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.295 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.241067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.241097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.241222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.241254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.241421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.241451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 
00:28:31.296 [2024-12-10 12:36:53.241642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.241673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.241837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.241873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.241985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.242016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.242123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.242152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.242271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.242303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 
00:28:31.296 [2024-12-10 12:36:53.242469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.242499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.242597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.242628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.242803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.242835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.243017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.243047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.243247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.243278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 
00:28:31.296 [2024-12-10 12:36:53.243448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.243478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.243652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.243682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.243784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.243815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.243963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.243994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.244184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.244216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 
00:28:31.296 [2024-12-10 12:36:53.244439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.244470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.244639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.244669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.244856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.244886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.245148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.245190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.245374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.245404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 
00:28:31.296 [2024-12-10 12:36:53.245517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.245547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.245715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.245746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.245981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.246011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.246184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.246216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.246389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.246420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 
00:28:31.296 [2024-12-10 12:36:53.246536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.246566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.246752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.246783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.246988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.247019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.247148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.247188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 00:28:31.296 [2024-12-10 12:36:53.247435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.296 [2024-12-10 12:36:53.247466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.296 qpair failed and we were unable to recover it. 
00:28:31.296 [2024-12-10 12:36:53.247567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.297 [2024-12-10 12:36:53.247598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.297 qpair failed and we were unable to recover it. 00:28:31.297 [2024-12-10 12:36:53.247714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.297 [2024-12-10 12:36:53.247744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.297 qpair failed and we were unable to recover it. 00:28:31.297 [2024-12-10 12:36:53.247915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.297 [2024-12-10 12:36:53.247946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.297 qpair failed and we were unable to recover it. 00:28:31.297 [2024-12-10 12:36:53.248131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.297 [2024-12-10 12:36:53.248171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.297 qpair failed and we were unable to recover it. 00:28:31.297 [2024-12-10 12:36:53.248348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.297 [2024-12-10 12:36:53.248378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.297 qpair failed and we were unable to recover it. 
00:28:31.297 [2024-12-10 12:36:53.248547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.297 [2024-12-10 12:36:53.248577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.297 qpair failed and we were unable to recover it.
00:28:31.297-00:28:31.300 [the same three-record sequence (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 12:36:53.248747 through 12:36:53.270210]
00:28:31.300 [2024-12-10 12:36:53.270338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.300 [2024-12-10 12:36:53.270369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.300 qpair failed and we were unable to recover it. 00:28:31.300 [2024-12-10 12:36:53.270536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.300 [2024-12-10 12:36:53.270566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.300 qpair failed and we were unable to recover it. 00:28:31.300 [2024-12-10 12:36:53.270770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.300 [2024-12-10 12:36:53.270802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.300 qpair failed and we were unable to recover it. 00:28:31.300 [2024-12-10 12:36:53.270913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.300 [2024-12-10 12:36:53.270943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.300 qpair failed and we were unable to recover it. 00:28:31.300 [2024-12-10 12:36:53.271045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.300 [2024-12-10 12:36:53.271076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.300 qpair failed and we were unable to recover it. 
00:28:31.300 [2024-12-10 12:36:53.271198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.300 [2024-12-10 12:36:53.271229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.300 qpair failed and we were unable to recover it. 00:28:31.300 [2024-12-10 12:36:53.271339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.300 [2024-12-10 12:36:53.271370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.300 qpair failed and we were unable to recover it. 00:28:31.300 [2024-12-10 12:36:53.271541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.300 [2024-12-10 12:36:53.271571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.300 qpair failed and we were unable to recover it. 00:28:31.300 [2024-12-10 12:36:53.271743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.300 [2024-12-10 12:36:53.271774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.271891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.271922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 
00:28:31.301 [2024-12-10 12:36:53.272182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.272214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.272367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.272408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.272523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.272554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.272720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.272751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.272920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.272951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 
00:28:31.301 [2024-12-10 12:36:53.273124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.273155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.273293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.273324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.273459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.273490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.273668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.273699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.273847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.273877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 
00:28:31.301 [2024-12-10 12:36:53.274064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.274095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.274262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.274294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.274460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.274491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.274657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.274688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.274869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.274899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 
00:28:31.301 [2024-12-10 12:36:53.275087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.275118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.275301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.275332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.275437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.275467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.275637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.275667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.275833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.275864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 
00:28:31.301 [2024-12-10 12:36:53.275998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.276029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.276151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.276193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.276359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.276389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.276554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.276584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.276754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.276785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 
00:28:31.301 [2024-12-10 12:36:53.276967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.276997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.277205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.277238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.277433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.277464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.277575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.277606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.277796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.277828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 
00:28:31.301 [2024-12-10 12:36:53.277951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.277981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.301 [2024-12-10 12:36:53.278152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.301 [2024-12-10 12:36:53.278195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.301 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.278394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.278426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.278537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.278566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.278672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.278702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 
00:28:31.302 [2024-12-10 12:36:53.278867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.278898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.278995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.279025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.279188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.279220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.279418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.279449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.279709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.279740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 
00:28:31.302 [2024-12-10 12:36:53.279912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.279942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.280109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.280145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.280266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.280297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.280404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.280434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.280601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.280631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 
00:28:31.302 [2024-12-10 12:36:53.280798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.280829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.280932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.280963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.281152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.281191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.281360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.281390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.281607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.281638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 
00:28:31.302 [2024-12-10 12:36:53.281764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.281794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.281907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.281938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.282060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.282091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.282203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.282234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.282402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.282432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 
00:28:31.302 [2024-12-10 12:36:53.282551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.282582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.282771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.282801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.282968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.282999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.283180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.283213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.283473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.283504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 
00:28:31.302 [2024-12-10 12:36:53.283671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.283701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.283865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.283896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.284166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.284197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.284393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.284424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 00:28:31.302 [2024-12-10 12:36:53.284531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.302 [2024-12-10 12:36:53.284562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.302 qpair failed and we were unable to recover it. 
00:28:31.302 [2024-12-10 12:36:53.284748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.302 [2024-12-10 12:36:53.284778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.303 qpair failed and we were unable to recover it.
[... the connect()/qpair-failure record above repeats with fresh timestamps through 12:36:53.292705 for tqpair=0x7f88d8000b90 ...]
00:28:31.304 [2024-12-10 12:36:53.292883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.304 [2024-12-10 12:36:53.292951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.304 qpair failed and we were unable to recover it.
[... the same record then repeats with fresh timestamps through 12:36:53.307167 for tqpair=0x1574be0 ...]
00:28:31.306 [2024-12-10 12:36:53.307269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.306 [2024-12-10 12:36:53.307301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.306 qpair failed and we were unable to recover it. 00:28:31.306 [2024-12-10 12:36:53.307428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.306 [2024-12-10 12:36:53.307459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.306 qpair failed and we were unable to recover it. 00:28:31.306 [2024-12-10 12:36:53.307658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.306 [2024-12-10 12:36:53.307688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.306 qpair failed and we were unable to recover it. 00:28:31.306 [2024-12-10 12:36:53.307857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.306 [2024-12-10 12:36:53.307888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.306 qpair failed and we were unable to recover it. 00:28:31.306 [2024-12-10 12:36:53.308058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.306 [2024-12-10 12:36:53.308089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.306 qpair failed and we were unable to recover it. 
00:28:31.306 [2024-12-10 12:36:53.308280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.306 [2024-12-10 12:36:53.308312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.306 qpair failed and we were unable to recover it. 00:28:31.306 [2024-12-10 12:36:53.308521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.306 [2024-12-10 12:36:53.308552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.306 qpair failed and we were unable to recover it. 00:28:31.306 [2024-12-10 12:36:53.308793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.306 [2024-12-10 12:36:53.308823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.306 qpair failed and we were unable to recover it. 00:28:31.306 [2024-12-10 12:36:53.308936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.306 [2024-12-10 12:36:53.308967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.306 qpair failed and we were unable to recover it. 00:28:31.306 [2024-12-10 12:36:53.309154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.306 [2024-12-10 12:36:53.309206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.306 qpair failed and we were unable to recover it. 
00:28:31.306 [2024-12-10 12:36:53.309468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.306 [2024-12-10 12:36:53.309500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.306 qpair failed and we were unable to recover it. 00:28:31.306 [2024-12-10 12:36:53.309634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.306 [2024-12-10 12:36:53.309664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.306 qpair failed and we were unable to recover it. 00:28:31.306 [2024-12-10 12:36:53.309769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.306 [2024-12-10 12:36:53.309800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.306 qpair failed and we were unable to recover it. 00:28:31.306 [2024-12-10 12:36:53.309918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.306 [2024-12-10 12:36:53.309949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.306 qpair failed and we were unable to recover it. 00:28:31.306 [2024-12-10 12:36:53.310117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.310148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 
00:28:31.307 [2024-12-10 12:36:53.310348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.310380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.310490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.310521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.310756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.310786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.310953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.310985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.311253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.311285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 
00:28:31.307 [2024-12-10 12:36:53.311406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.311437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.311627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.311658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.311777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.311808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.311931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.311962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.312138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.312180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 
00:28:31.307 [2024-12-10 12:36:53.312367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.312399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.312635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.312665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.312834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.312865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.312980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.313011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.313140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.313206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 
00:28:31.307 [2024-12-10 12:36:53.313333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.313364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.313476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.313507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.313623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.313654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.313765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.313796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.313913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.313944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 
00:28:31.307 [2024-12-10 12:36:53.314045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.314074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.314269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.314302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.314436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.314466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.314566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.314596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.314834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.314866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 
00:28:31.307 [2024-12-10 12:36:53.315030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.315060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.315230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.315262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.315370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.315401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.315531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.315562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.315673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.315704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 
00:28:31.307 [2024-12-10 12:36:53.315878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.315909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.316020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.316050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.316221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.307 [2024-12-10 12:36:53.316253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.307 qpair failed and we were unable to recover it. 00:28:31.307 [2024-12-10 12:36:53.316416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.316448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 00:28:31.308 [2024-12-10 12:36:53.316644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.316674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 
00:28:31.308 [2024-12-10 12:36:53.316849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.316880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 00:28:31.308 [2024-12-10 12:36:53.316993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.317023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 00:28:31.308 [2024-12-10 12:36:53.317205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.317236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 00:28:31.308 [2024-12-10 12:36:53.317419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.317449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 00:28:31.308 [2024-12-10 12:36:53.317555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.317585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 
00:28:31.308 [2024-12-10 12:36:53.317753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.317785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 00:28:31.308 [2024-12-10 12:36:53.317952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.317983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 00:28:31.308 [2024-12-10 12:36:53.318149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.318202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 00:28:31.308 [2024-12-10 12:36:53.318395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.318426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 00:28:31.308 [2024-12-10 12:36:53.318686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.318716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 
00:28:31.308 [2024-12-10 12:36:53.318833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.318863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 00:28:31.308 [2024-12-10 12:36:53.319001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.319031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 00:28:31.308 [2024-12-10 12:36:53.319135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.319175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 00:28:31.308 [2024-12-10 12:36:53.319359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.319390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 00:28:31.308 [2024-12-10 12:36:53.319585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.319615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 
00:28:31.308 [2024-12-10 12:36:53.319781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.319811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 00:28:31.308 [2024-12-10 12:36:53.319923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.319953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 00:28:31.308 [2024-12-10 12:36:53.320135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.320175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 00:28:31.308 [2024-12-10 12:36:53.320342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.320372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 00:28:31.308 [2024-12-10 12:36:53.320569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.320601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 
00:28:31.308 [2024-12-10 12:36:53.320778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.320807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 00:28:31.308 [2024-12-10 12:36:53.320978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.321009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 00:28:31.308 [2024-12-10 12:36:53.321111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.321141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 00:28:31.308 [2024-12-10 12:36:53.321341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.321373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 00:28:31.308 [2024-12-10 12:36:53.321493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.321524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 
00:28:31.308 [2024-12-10 12:36:53.321711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.308 [2024-12-10 12:36:53.321741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.308 qpair failed and we were unable to recover it. 
00:28:31.312 [... same error triple (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated for every reconnect attempt from 12:36:53.321845 through 12:36:53.342999 ...]
00:28:31.312 [2024-12-10 12:36:53.343101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.312 [2024-12-10 12:36:53.343131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.312 qpair failed and we were unable to recover it. 00:28:31.312 [2024-12-10 12:36:53.343283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.312 [2024-12-10 12:36:53.343353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.312 qpair failed and we were unable to recover it. 00:28:31.312 [2024-12-10 12:36:53.343587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.312 [2024-12-10 12:36:53.343656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.312 qpair failed and we were unable to recover it.
[... two further repeats of the same triple for tqpair=0x7f88d8000b90 at 12:36:53.343853 and 12:36:53.344007 elided ...]
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triple repeats for tqpair=0x7f88d8000b90 from 12:36:53.344242 through 12:36:53.361137; only the timestamps differ ...]
00:28:31.315 [2024-12-10 12:36:53.361315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.315 [2024-12-10 12:36:53.361346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.315 qpair failed and we were unable to recover it. 00:28:31.315 [2024-12-10 12:36:53.361517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.315 [2024-12-10 12:36:53.361547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.315 qpair failed and we were unable to recover it. 00:28:31.315 [2024-12-10 12:36:53.361667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.315 [2024-12-10 12:36:53.361698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.315 qpair failed and we were unable to recover it. 00:28:31.315 [2024-12-10 12:36:53.361813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.315 [2024-12-10 12:36:53.361844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.315 qpair failed and we were unable to recover it. 00:28:31.315 [2024-12-10 12:36:53.362037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.315 [2024-12-10 12:36:53.362068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.315 qpair failed and we were unable to recover it. 
00:28:31.315 [2024-12-10 12:36:53.362182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.315 [2024-12-10 12:36:53.362213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.315 qpair failed and we were unable to recover it. 00:28:31.315 [2024-12-10 12:36:53.362385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.315 [2024-12-10 12:36:53.362415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.315 qpair failed and we were unable to recover it. 00:28:31.315 [2024-12-10 12:36:53.362527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.315 [2024-12-10 12:36:53.362558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.315 qpair failed and we were unable to recover it. 00:28:31.315 [2024-12-10 12:36:53.362679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.315 [2024-12-10 12:36:53.362711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.315 qpair failed and we were unable to recover it. 00:28:31.315 [2024-12-10 12:36:53.362814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.315 [2024-12-10 12:36:53.362844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.315 qpair failed and we were unable to recover it. 
00:28:31.315 [2024-12-10 12:36:53.362956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.315 [2024-12-10 12:36:53.362986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.315 qpair failed and we were unable to recover it. 00:28:31.315 [2024-12-10 12:36:53.363099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.315 [2024-12-10 12:36:53.363131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.315 qpair failed and we were unable to recover it. 00:28:31.315 [2024-12-10 12:36:53.363256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.315 [2024-12-10 12:36:53.363286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.315 qpair failed and we were unable to recover it. 00:28:31.315 [2024-12-10 12:36:53.363469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.363500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.363611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.363641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 
00:28:31.316 [2024-12-10 12:36:53.363807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.363838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.363953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.363984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.364167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.364201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.364403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.364433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.364546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.364577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 
00:28:31.316 [2024-12-10 12:36:53.364705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.364735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.364857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.364889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.365109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.365139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.365353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.365385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.365557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.365587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 
00:28:31.316 [2024-12-10 12:36:53.365691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.365722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.365911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.365941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.366147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.366190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.366357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.366387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.366493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.366524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 
00:28:31.316 [2024-12-10 12:36:53.366693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.366723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.366840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.366871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.367041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.367070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.367192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.367224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.367435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.367472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 
00:28:31.316 [2024-12-10 12:36:53.367640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.367669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.367851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.367881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.368054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.368084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.368235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.368267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.368474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.368504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 
00:28:31.316 [2024-12-10 12:36:53.368602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.368633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.368829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.368859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.369047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.369077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.369194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.369229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.369339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.369370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 
00:28:31.316 [2024-12-10 12:36:53.369536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.369565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.369737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.369768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.369970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.369999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.370226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.370257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.370359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.370389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 
00:28:31.316 [2024-12-10 12:36:53.370596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.370626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.370794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.370825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.370929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.370959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.371056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.371086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.371288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.371320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 
00:28:31.316 [2024-12-10 12:36:53.371487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.371517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.371682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.371713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.316 qpair failed and we were unable to recover it. 00:28:31.316 [2024-12-10 12:36:53.371881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.316 [2024-12-10 12:36:53.371910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.317 qpair failed and we were unable to recover it. 00:28:31.317 [2024-12-10 12:36:53.372019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.317 [2024-12-10 12:36:53.372049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.317 qpair failed and we were unable to recover it. 00:28:31.317 [2024-12-10 12:36:53.372148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.317 [2024-12-10 12:36:53.372189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.317 qpair failed and we were unable to recover it. 
00:28:31.317 [2024-12-10 12:36:53.372355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.317 [2024-12-10 12:36:53.372385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.317 qpair failed and we were unable to recover it. 00:28:31.317 [2024-12-10 12:36:53.372523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.317 [2024-12-10 12:36:53.372554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.317 qpair failed and we were unable to recover it. 00:28:31.317 [2024-12-10 12:36:53.372677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.317 [2024-12-10 12:36:53.372708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.317 qpair failed and we were unable to recover it. 00:28:31.317 [2024-12-10 12:36:53.372810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.317 [2024-12-10 12:36:53.372840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.317 qpair failed and we were unable to recover it. 00:28:31.317 [2024-12-10 12:36:53.372951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.317 [2024-12-10 12:36:53.372981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.317 qpair failed and we were unable to recover it. 
00:28:31.317 [2024-12-10 12:36:53.373169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.317 [2024-12-10 12:36:53.373201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.317 qpair failed and we were unable to recover it. 00:28:31.317 [2024-12-10 12:36:53.373368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.317 [2024-12-10 12:36:53.373398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.317 qpair failed and we were unable to recover it. 00:28:31.317 [2024-12-10 12:36:53.373570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.317 [2024-12-10 12:36:53.373600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.317 qpair failed and we were unable to recover it. 00:28:31.317 [2024-12-10 12:36:53.373705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.317 [2024-12-10 12:36:53.373735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.317 qpair failed and we were unable to recover it. 00:28:31.317 [2024-12-10 12:36:53.373850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.317 [2024-12-10 12:36:53.373880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.317 qpair failed and we were unable to recover it. 
00:28:31.317 [2024-12-10 12:36:53.374050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.317 [2024-12-10 12:36:53.374079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.317 qpair failed and we were unable to recover it. 00:28:31.317 [2024-12-10 12:36:53.374227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.317 [2024-12-10 12:36:53.374258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.317 qpair failed and we were unable to recover it. 00:28:31.317 [2024-12-10 12:36:53.374359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.317 [2024-12-10 12:36:53.374389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.317 qpair failed and we were unable to recover it. 00:28:31.317 [2024-12-10 12:36:53.374493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.317 [2024-12-10 12:36:53.374524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.317 qpair failed and we were unable to recover it. 00:28:31.317 [2024-12-10 12:36:53.374638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.317 [2024-12-10 12:36:53.374673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.317 qpair failed and we were unable to recover it. 
00:28:31.317 [2024-12-10 12:36:53.374775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.317 [2024-12-10 12:36:53.374805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.317 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock errors for tqpair=0x7f88d8000b90, addr=10.0.0.2, port=4420 repeated continuously from 12:36:53.374989 through 12:36:53.395681; duplicate entries omitted ...]
00:28:31.319 [2024-12-10 12:36:53.395795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.319 [2024-12-10 12:36:53.395825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.319 qpair failed and we were unable to recover it. 00:28:31.319 [2024-12-10 12:36:53.396012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.319 [2024-12-10 12:36:53.396042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.396169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.396202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.396311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.396342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.396505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.396541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 
00:28:31.320 [2024-12-10 12:36:53.396783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.396813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.396915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.396945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.397067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.397097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.397298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.397329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.397498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.397528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 
00:28:31.320 [2024-12-10 12:36:53.397712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.397742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.397876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.397907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.398007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.398037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.398219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.398251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.398418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.398448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 
00:28:31.320 [2024-12-10 12:36:53.398550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.398580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.398818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.398849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.399033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.399063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.399174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.399207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.399399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.399429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 
00:28:31.320 [2024-12-10 12:36:53.399627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.399658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.399823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.399853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.399980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.400011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.400123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.400152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.400287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.400317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 
00:28:31.320 [2024-12-10 12:36:53.400428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.400458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.400625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.400656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.400755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.400785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.400986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.401015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.401210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.401241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 
00:28:31.320 [2024-12-10 12:36:53.401430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.401460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.401612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.401682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.401900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.401934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.402100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.402131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.402287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.402320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 
00:28:31.320 [2024-12-10 12:36:53.402439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.402471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.402647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.402677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.402781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.402812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.402915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.402946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.403114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.403144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 
00:28:31.320 [2024-12-10 12:36:53.403274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.403305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.403515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.403546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.403724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.403754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.404011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.404041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.404154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.404196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 
00:28:31.320 [2024-12-10 12:36:53.404382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.404412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.404579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.404609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.404728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.404759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.404923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.404953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.405146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.405189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 
00:28:31.320 [2024-12-10 12:36:53.405323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.405354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.320 qpair failed and we were unable to recover it. 00:28:31.320 [2024-12-10 12:36:53.405466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.320 [2024-12-10 12:36:53.405496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.405623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.405654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.405853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.405884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.405989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.406020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 
00:28:31.321 [2024-12-10 12:36:53.406235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.406268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.406437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.406468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.406768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.406798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.406981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.407015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.407140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.407179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 
00:28:31.321 [2024-12-10 12:36:53.407286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.407316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.407482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.407513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.407680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.407711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.407879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.407909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.408104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.408136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 
00:28:31.321 [2024-12-10 12:36:53.408277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.408308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.408436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.408467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.408648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.408679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.408871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.408902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.409081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.409119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 
00:28:31.321 [2024-12-10 12:36:53.409432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.409469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.409642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.409673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.409876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.409919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.410192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.410228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.410422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.410452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 
00:28:31.321 [2024-12-10 12:36:53.410621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.410652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.410834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.410865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.411026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.411057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.411236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.411281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.411471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.411517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 
00:28:31.321 [2024-12-10 12:36:53.411641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.411673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.411839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.411870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.412048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.412080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.412379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.412417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.412588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.412619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 
00:28:31.321 [2024-12-10 12:36:53.412876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.412907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.413019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.413049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.413241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.413273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.413442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.413478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.413694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.413730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 
00:28:31.321 [2024-12-10 12:36:53.413834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.413864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.321 [2024-12-10 12:36:53.414051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.321 [2024-12-10 12:36:53.414080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.321 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.414285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.414318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.414490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.414520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.414703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.414734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 
00:28:31.607 [2024-12-10 12:36:53.414900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.414931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.415101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.415144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.415332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.415375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.415521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.415572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.415849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.415896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 
00:28:31.607 [2024-12-10 12:36:53.416183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.416227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.416408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.416441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.416654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.416688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.416870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.416903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.417082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.417116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 
00:28:31.607 [2024-12-10 12:36:53.417266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.417300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.417497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.417531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.417742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.417776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.417902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.417935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.418107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.418138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 
00:28:31.607 [2024-12-10 12:36:53.418335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.418370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.418488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.418518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.418665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.418697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.418802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.418833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.419126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.419170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 
00:28:31.607 [2024-12-10 12:36:53.419306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.419341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.419512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.419544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.419713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.419744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.419918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.419952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.420122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.420153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 
00:28:31.607 [2024-12-10 12:36:53.420289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.420322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.420428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.420458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.420744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.420777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.420947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.420978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.421173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.421205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 
00:28:31.607 [2024-12-10 12:36:53.421374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.421412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.421530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.421561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.421724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.421755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.421946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.607 [2024-12-10 12:36:53.421977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.607 qpair failed and we were unable to recover it. 00:28:31.607 [2024-12-10 12:36:53.422090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.422121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 
00:28:31.608 [2024-12-10 12:36:53.422313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.422345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.422531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.422562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.422727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.422757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.422871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.422903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.423005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.423035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 
00:28:31.608 [2024-12-10 12:36:53.423201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.423234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.423496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.423527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.423647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.423678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.423890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.423920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.424033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.424064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 
00:28:31.608 [2024-12-10 12:36:53.424240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.424273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.424474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.424505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.424701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.424731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.424900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.424931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.425097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.425127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 
00:28:31.608 [2024-12-10 12:36:53.425347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.425416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.425547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.425583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.425702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.425734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.425922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.425954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.426069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.426100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 
00:28:31.608 [2024-12-10 12:36:53.426377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.426410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.426605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.426636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.426749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.426789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.426988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.427026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.427144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.427190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 
00:28:31.608 [2024-12-10 12:36:53.427430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.427460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.427626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.427657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.427766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.427797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.427916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.427947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.428111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.428142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 
00:28:31.608 [2024-12-10 12:36:53.428335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.428367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.428559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.428590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.428852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.428883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.429055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.429085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.429276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.429309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 
00:28:31.608 [2024-12-10 12:36:53.429432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.429464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.429686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.429717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.429883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.608 [2024-12-10 12:36:53.429914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.608 qpair failed and we were unable to recover it. 00:28:31.608 [2024-12-10 12:36:53.430098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.609 [2024-12-10 12:36:53.430129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.609 qpair failed and we were unable to recover it. 00:28:31.609 [2024-12-10 12:36:53.430245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.609 [2024-12-10 12:36:53.430283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.609 qpair failed and we were unable to recover it. 
00:28:31.609 [2024-12-10 12:36:53.430415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.609 [2024-12-10 12:36:53.430446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.609 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / "qpair failed and we were unable to recover it." triple repeats ~114 more times between 12:36:53.430 and 12:36:53.452, every attempt failing with errno = 111 against addr=10.0.0.2, port=4420, for tqpair addresses 0x1574be0, 0x7f88cc000b90, and 0x7f88d0000b90 ...]
00:28:31.612 [2024-12-10 12:36:53.452895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.452926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.453103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.453134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.453280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.453310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.453475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.453507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.453670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.453701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 
00:28:31.612 [2024-12-10 12:36:53.453821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.453852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.454018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.454049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.454153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.454196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.454306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.454337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.454451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.454482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 
00:28:31.612 [2024-12-10 12:36:53.454583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.454614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.454789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.454821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.455081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.455111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.455383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.455416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.455600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.455636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 
00:28:31.612 [2024-12-10 12:36:53.455804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.455835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.456002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.456033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.456217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.456249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.456353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.456384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.456495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.456525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 
00:28:31.612 [2024-12-10 12:36:53.456642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.456673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.456803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.456833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.457014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.457046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.457239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.457271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.457385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.457415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 
00:28:31.612 [2024-12-10 12:36:53.457581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.457611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.457729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.457761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.457952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.457982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.458253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.458285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.458482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.458514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 
00:28:31.612 [2024-12-10 12:36:53.458682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.458713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.458883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.612 [2024-12-10 12:36:53.458914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.612 qpair failed and we were unable to recover it. 00:28:31.612 [2024-12-10 12:36:53.459129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.459180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.459279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.459311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.459490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.459521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 
00:28:31.613 [2024-12-10 12:36:53.459715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.459747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.459919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.459950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.460149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.460189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.460385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.460416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.460531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.460563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 
00:28:31.613 [2024-12-10 12:36:53.460662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.460693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.460819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.460850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.460951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.460981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.461146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.461188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.461356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.461387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 
00:28:31.613 [2024-12-10 12:36:53.461554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.461585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.461690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.461721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.461893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.461924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.462039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.462069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.462307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.462339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 
00:28:31.613 [2024-12-10 12:36:53.462538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.462569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.462737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.462767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.462936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.462968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.463139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.463188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.463385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.463423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 
00:28:31.613 [2024-12-10 12:36:53.463625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.463656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.463830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.463862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.464027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.464057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.464168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.464200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.464459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.464490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 
00:28:31.613 [2024-12-10 12:36:53.464593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.464625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.464790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.464820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.464933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.464964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.465079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.465109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.465231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.465262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 
00:28:31.613 [2024-12-10 12:36:53.465384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.465415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.465599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.465631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.465804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.465833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.465943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.465975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.466093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.466124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 
00:28:31.613 [2024-12-10 12:36:53.466309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.613 [2024-12-10 12:36:53.466341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.613 qpair failed and we were unable to recover it. 00:28:31.613 [2024-12-10 12:36:53.466447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-10 12:36:53.466477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-10 12:36:53.466641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-10 12:36:53.466672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-10 12:36:53.466877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-10 12:36:53.466907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-10 12:36:53.467024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-10 12:36:53.467055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 
00:28:31.614 [2024-12-10 12:36:53.467219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-10 12:36:53.467253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-10 12:36:53.467363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-10 12:36:53.467393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-10 12:36:53.467559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-10 12:36:53.467589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-10 12:36:53.467840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-10 12:36:53.467871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-10 12:36:53.467983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-10 12:36:53.468013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 
00:28:31.614 [2024-12-10 12:36:53.468196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-10 12:36:53.468227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-10 12:36:53.468495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-10 12:36:53.468525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-10 12:36:53.468644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-10 12:36:53.468675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-10 12:36:53.468867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-10 12:36:53.468897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-10 12:36:53.469096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-10 12:36:53.469127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 
00:28:31.614 [2024-12-10 12:36:53.469359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.614 [2024-12-10 12:36:53.469403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.614 qpair failed and we were unable to recover it.
00:28:31.614 [2024-12-10 12:36:53.472326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.614 [2024-12-10 12:36:53.472364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.614 qpair failed and we were unable to recover it.
00:28:31.614 [2024-12-10 12:36:53.472914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-10 12:36:53.472945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-10 12:36:53.473045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-10 12:36:53.473076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-10 12:36:53.473206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-10 12:36:53.473238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-10 12:36:53.473425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-10 12:36:53.473456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 00:28:31.614 [2024-12-10 12:36:53.473626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-10 12:36:53.473658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.614 qpair failed and we were unable to recover it. 
00:28:31.614 [2024-12-10 12:36:53.473822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.614 [2024-12-10 12:36:53.473852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.473964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.473995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.474179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.474212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.474385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.474415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.474614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.474645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 
00:28:31.615 [2024-12-10 12:36:53.474758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.474789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.474909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.474940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.475105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.475135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.475312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.475342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.475508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.475538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 
00:28:31.615 [2024-12-10 12:36:53.475664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.475694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.475869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.475900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.476167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.476198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.476307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.476338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.476504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.476534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 
00:28:31.615 [2024-12-10 12:36:53.476724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.476755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.476887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.476917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.477030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.477060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.477234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.477265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.477369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.477399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 
00:28:31.615 [2024-12-10 12:36:53.477585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.477615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.477848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.477878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.478057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.478088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.478206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.478237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.478364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.478401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 
00:28:31.615 [2024-12-10 12:36:53.478584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.478614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.478725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.478756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.478925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.478955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.479120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.479151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.479336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.479368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 
00:28:31.615 [2024-12-10 12:36:53.479569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.479599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.479701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.479731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.479985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.480015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.480207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.480238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 00:28:31.615 [2024-12-10 12:36:53.480371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.480401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.615 qpair failed and we were unable to recover it. 
00:28:31.615 [2024-12-10 12:36:53.480576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.615 [2024-12-10 12:36:53.480606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.480714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.480745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.480847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.480877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.481053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.481084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.481205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.481237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 
00:28:31.616 [2024-12-10 12:36:53.481349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.481379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.481549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.481581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.481751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.481782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.481962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.481993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.482254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.482286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 
00:28:31.616 [2024-12-10 12:36:53.482455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.482485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.482637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.482667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.482785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.482816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.482991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.483021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.483281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.483312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 
00:28:31.616 [2024-12-10 12:36:53.483495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.483526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.483798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.483830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.484093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.484123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.484303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.484335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.484451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.484481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 
00:28:31.616 [2024-12-10 12:36:53.484596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.484625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.484810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.484841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.484947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.484979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.485189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.485221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.485393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.485424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 
00:28:31.616 [2024-12-10 12:36:53.485615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.485646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.485814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.485845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.486011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.486041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.486230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.486263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.486468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.486505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 
00:28:31.616 [2024-12-10 12:36:53.486609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.486640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.486748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.486779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.486964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.486994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.487185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.487218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.487322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.487354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 
00:28:31.616 [2024-12-10 12:36:53.487595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.487625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.487735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.487767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.487970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.488001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.488128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.616 [2024-12-10 12:36:53.488169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.616 qpair failed and we were unable to recover it. 00:28:31.616 [2024-12-10 12:36:53.488270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.488302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 
00:28:31.617 [2024-12-10 12:36:53.488469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.488500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.488665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.488696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.488901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.488933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.489119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.489151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.489343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.489373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 
00:28:31.617 [2024-12-10 12:36:53.489635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.489665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.489783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.489813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.489980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.490011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.490179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.490211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.490311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.490342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 
00:28:31.617 [2024-12-10 12:36:53.490525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.490555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.490766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.490797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.490916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.490947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.491133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.491182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.491374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.491404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 
00:28:31.617 [2024-12-10 12:36:53.491598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.491629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.491747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.491778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.491964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.491995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.492182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.492214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.492396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.492426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 
00:28:31.617 [2024-12-10 12:36:53.492566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.492596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.492784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.492816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.492986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.493016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.493136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.493177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.493348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.493380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 
00:28:31.617 [2024-12-10 12:36:53.493550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.493581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.493700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.493731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.493925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.493956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.494146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.494187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.494293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.494329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 
00:28:31.617 [2024-12-10 12:36:53.494592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.494622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.494789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.494821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.495021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.495051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.495183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.495214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.495384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.495414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 
00:28:31.617 [2024-12-10 12:36:53.495582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.495613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.495786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.495816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.496018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.617 [2024-12-10 12:36:53.496049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.617 qpair failed and we were unable to recover it. 00:28:31.617 [2024-12-10 12:36:53.496216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.496249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.496353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.496384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 
00:28:31.618 [2024-12-10 12:36:53.496558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.496588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.496757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.496788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.497048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.497077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.497258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.497290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.497409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.497440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 
00:28:31.618 [2024-12-10 12:36:53.497618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.497649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.497773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.497803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.497969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.497999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.498179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.498211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.498320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.498351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 
00:28:31.618 [2024-12-10 12:36:53.498457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.498487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.498653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.498684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.498849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.498880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.499060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.499091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.499210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.499242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 
00:28:31.618 [2024-12-10 12:36:53.499442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.499472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.499581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.499612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.499803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.499832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.500029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.500060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.500199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.500231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 
00:28:31.618 [2024-12-10 12:36:53.500438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.500468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.500584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.500616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.500782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.500812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.500983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.501014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.501120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.501150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 
00:28:31.618 [2024-12-10 12:36:53.501287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.501318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.501430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.501461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.501652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.501682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.501780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.501810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.501991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.502027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 
00:28:31.618 [2024-12-10 12:36:53.502215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.502246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.502364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.502394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.502560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.502591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.502725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.502755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.502872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.502903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 
00:28:31.618 [2024-12-10 12:36:53.503032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.503062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.503230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.618 [2024-12-10 12:36:53.503262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.618 qpair failed and we were unable to recover it. 00:28:31.618 [2024-12-10 12:36:53.503457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.619 [2024-12-10 12:36:53.503487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.619 qpair failed and we were unable to recover it. 00:28:31.619 [2024-12-10 12:36:53.503653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.619 [2024-12-10 12:36:53.503684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.619 qpair failed and we were unable to recover it. 00:28:31.619 [2024-12-10 12:36:53.503848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.619 [2024-12-10 12:36:53.503879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.619 qpair failed and we were unable to recover it. 
00:28:31.619 [2024-12-10 12:36:53.503979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.619 [2024-12-10 12:36:53.504009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.619 qpair failed and we were unable to recover it. 00:28:31.619 [2024-12-10 12:36:53.504118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.619 [2024-12-10 12:36:53.504149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.619 qpair failed and we were unable to recover it. 00:28:31.619 [2024-12-10 12:36:53.504334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.619 [2024-12-10 12:36:53.504365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.619 qpair failed and we were unable to recover it. 00:28:31.619 [2024-12-10 12:36:53.504555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.619 [2024-12-10 12:36:53.504586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.619 qpair failed and we were unable to recover it. 00:28:31.619 [2024-12-10 12:36:53.504696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.619 [2024-12-10 12:36:53.504726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.619 qpair failed and we were unable to recover it. 
00:28:31.622 [2024-12-10 12:36:53.526710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.526741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.527015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.527045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.527183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.527215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.527332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.527363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.527615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.527644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 
00:28:31.622 [2024-12-10 12:36:53.527756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.527786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.527885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.527916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.528083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.528112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.528312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.528343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.528513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.528544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 
00:28:31.622 [2024-12-10 12:36:53.528724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.528754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.528853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.528883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.529055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.529085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.529189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.529221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.529324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.529355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 
00:28:31.622 [2024-12-10 12:36:53.529522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.529552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.529764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.529795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.529995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.530025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.530208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.530241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.530418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.530449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 
00:28:31.622 [2024-12-10 12:36:53.530553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.530583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.530916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.530985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.531142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.531189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.531309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.531341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.531452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.531483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 
00:28:31.622 [2024-12-10 12:36:53.531657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.531688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.531857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.531888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.532087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.532118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.532295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.532328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.532500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.532530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 
00:28:31.622 [2024-12-10 12:36:53.532698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.532730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.532897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.532928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.533132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.533172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.622 qpair failed and we were unable to recover it. 00:28:31.622 [2024-12-10 12:36:53.533353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.622 [2024-12-10 12:36:53.533384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.533487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.533527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 
00:28:31.623 [2024-12-10 12:36:53.533726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.533757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.533935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.533966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.534093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.534123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.534239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.534271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.534403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.534433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 
00:28:31.623 [2024-12-10 12:36:53.534536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.534567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.534686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.534717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.534887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.534918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.535028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.535058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.535320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.535353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 
00:28:31.623 [2024-12-10 12:36:53.535486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.535517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.535770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.535801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.535978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.536009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.536233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.536265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.536448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.536478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 
00:28:31.623 [2024-12-10 12:36:53.536579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.536610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.536740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.536771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.536890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.536921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.537120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.537150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.537348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.537380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 
00:28:31.623 [2024-12-10 12:36:53.537572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.537602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.537783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.537815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.538021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.538051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.538243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.538275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.538445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.538475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 
00:28:31.623 [2024-12-10 12:36:53.538703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.538733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.538945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.538977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.539145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.539186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.539382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.539412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.539515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.539546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 
00:28:31.623 [2024-12-10 12:36:53.539804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.539835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.539950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.539981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.540090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.540121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.540267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.540299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.540472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.540503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 
00:28:31.623 [2024-12-10 12:36:53.540691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.540721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.540889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.540920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.623 qpair failed and we were unable to recover it. 00:28:31.623 [2024-12-10 12:36:53.541032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.623 [2024-12-10 12:36:53.541062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-10 12:36:53.541180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-10 12:36:53.541213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 00:28:31.624 [2024-12-10 12:36:53.541383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.624 [2024-12-10 12:36:53.541414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.624 qpair failed and we were unable to recover it. 
00:28:31.624 [2024-12-10 12:36:53.541599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.624 [2024-12-10 12:36:53.541630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.624 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 → sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it.) repeats verbatim through timestamp 12:36:53.564081; duplicate repetitions trimmed ...]
00:28:31.627 [2024-12-10 12:36:53.564183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.564215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.564328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.564360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.564548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.564578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.564703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.564735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.564906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.564936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 
00:28:31.627 [2024-12-10 12:36:53.565130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.565169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.565339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.565370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.565473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.565503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.565670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.565702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.565866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.565897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 
00:28:31.627 [2024-12-10 12:36:53.566081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.566112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.566287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.566318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.566486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.566517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.566696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.566726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.566853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.566883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 
00:28:31.627 [2024-12-10 12:36:53.567053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.567083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.567251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.567283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.567483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.567513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.567640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.567673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.567781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.567810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 
00:28:31.627 [2024-12-10 12:36:53.567920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.567951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.568117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.568148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.568333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.568364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.568548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.568578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.568693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.568724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 
00:28:31.627 [2024-12-10 12:36:53.568912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.568942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.569124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.569155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.569337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.569372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.569541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.569571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.569741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.569772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 
00:28:31.627 [2024-12-10 12:36:53.569901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.569931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.570191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.570229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.570410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.570440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.627 [2024-12-10 12:36:53.570608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.627 [2024-12-10 12:36:53.570638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.627 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.570752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.570782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 
00:28:31.628 [2024-12-10 12:36:53.570884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.570914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.571106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.571137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.571426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.571456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.571645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.571676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.571849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.571880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 
00:28:31.628 [2024-12-10 12:36:53.571989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.572019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.572195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.572227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.572402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.572432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.572607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.572638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.572743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.572773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 
00:28:31.628 [2024-12-10 12:36:53.572990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.573021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.573197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.573228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.573398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.573428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.573611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.573642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.573754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.573784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 
00:28:31.628 [2024-12-10 12:36:53.573956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.573987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.574101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.574131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.574377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.574408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.574512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.574542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.574652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.574683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 
00:28:31.628 [2024-12-10 12:36:53.574803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.574834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.575007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.575038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.575213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.575244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.575351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.575382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.575550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.575581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 
00:28:31.628 [2024-12-10 12:36:53.575747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.575778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.575880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.575910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.576077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.576108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.576285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.576317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.576483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.576513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 
00:28:31.628 [2024-12-10 12:36:53.576624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.576654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.576892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.576922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.577037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.577067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.577235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.577267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.577456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.577486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 
00:28:31.628 [2024-12-10 12:36:53.577748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.577778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.577898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.577935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.628 [2024-12-10 12:36:53.578108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.628 [2024-12-10 12:36:53.578138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.628 qpair failed and we were unable to recover it. 00:28:31.629 [2024-12-10 12:36:53.578325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.629 [2024-12-10 12:36:53.578358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.629 qpair failed and we were unable to recover it. 00:28:31.629 [2024-12-10 12:36:53.578474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.629 [2024-12-10 12:36:53.578504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.629 qpair failed and we were unable to recover it. 
00:28:31.629 [2024-12-10 12:36:53.578695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.578725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.578906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.578937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.579107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.579137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.579385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.579416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.579529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.579560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.579668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.579699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.579868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.579899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.580087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.580118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.580300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.580331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.580502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.580532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.580758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.580789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.580909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.580939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.581050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.581080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.581193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.581225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.581342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.581373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.581489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.581519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.581713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.581743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.581853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.581884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.582067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.582098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.582284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.582317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.582418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.582449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.582615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.582645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.582827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.582857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.583076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.583107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.583285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.583316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.583507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.583537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.583710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.583741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.583857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.583887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.584055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.584086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.584234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.584283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.584477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.584507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.584632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.584663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.629 qpair failed and we were unable to recover it.
00:28:31.629 [2024-12-10 12:36:53.584779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.629 [2024-12-10 12:36:53.584809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.584975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.585005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.585180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.585213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.585422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.585452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.585586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.585622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.585795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.585826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.586063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.586093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.586213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.586245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.586413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.586444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.586546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.586577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.586692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.586722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.586849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.586879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.587056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.587086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.587207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.587238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.587338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.587368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.587534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.587565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.587731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.587762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.587927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.587957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.588132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.588170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.588416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.588446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.588617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.588648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.588905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.588936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.589047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.589078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.589209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.589241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.589416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.589447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.589549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.589579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.589776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.589807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.589996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.590027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.590211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.590242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.590433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.590464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.590578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.590608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.590799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.590830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.591017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.591048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.591256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.591287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.591476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.591507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.591610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.591641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.591844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.591875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.591980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.592011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.630 [2024-12-10 12:36:53.592115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.630 [2024-12-10 12:36:53.592145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.630 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.592377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.592408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.592525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.592555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.592666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.592697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.592800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.592830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.592934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.592965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.593173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.593210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.593326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.593357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.593564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.593595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.593726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.593756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.593859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.593889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.594098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.594129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.594320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.594352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.594530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.594561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.594738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.594769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.594881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.594912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.595110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.595141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.595332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.595363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.595461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.595492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.595734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.595765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.595943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.595973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.596172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.596204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.596371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.596402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.596518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.596548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.596812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.596842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.597010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.597041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.597275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.597307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.597476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.597506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.597700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.597730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.597901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.597932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.598045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.598076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.598194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.598225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.598341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.598371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.598424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1582b20 (9): Bad file descriptor
00:28:31.631 [2024-12-10 12:36:53.598707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.598776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.598920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.598954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.599113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.599145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.599363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.599395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.599636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.599668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.599863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.599894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.631 qpair failed and we were unable to recover it.
00:28:31.631 [2024-12-10 12:36:53.600063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.631 [2024-12-10 12:36:53.600093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.632 qpair failed and we were unable to recover it.
00:28:31.632 [2024-12-10 12:36:53.600198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.632 [2024-12-10 12:36:53.600231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.632 qpair failed and we were unable to recover it.
00:28:31.632 [2024-12-10 12:36:53.600399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.632 [2024-12-10 12:36:53.600431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.632 qpair failed and we were unable to recover it.
00:28:31.632 [2024-12-10 12:36:53.600599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.632 [2024-12-10 12:36:53.600630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.632 qpair failed and we were unable to recover it.
00:28:31.632 [2024-12-10 12:36:53.600800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.632 [2024-12-10 12:36:53.600831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.632 qpair failed and we were unable to recover it.
00:28:31.632 [2024-12-10 12:36:53.601012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.632 [2024-12-10 12:36:53.601043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.632 qpair failed and we were unable to recover it.
00:28:31.632 [2024-12-10 12:36:53.601213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.601245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.601367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.601398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.601565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.601595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.601846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.601877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.602011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.602041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 
00:28:31.632 [2024-12-10 12:36:53.602226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.602258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.602475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.602507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.602679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.602710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.602819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.602848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.603020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.603051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 
00:28:31.632 [2024-12-10 12:36:53.603178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.603210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.603336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.603367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.603534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.603565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.603824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.603857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.603964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.604000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 
00:28:31.632 [2024-12-10 12:36:53.604194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.604226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.604409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.604440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.604608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.604639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.604827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.604857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.604959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.604990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 
00:28:31.632 [2024-12-10 12:36:53.605228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.605259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.605425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.605456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.605562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.605593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.605772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.605802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.605922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.605953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 
00:28:31.632 [2024-12-10 12:36:53.606117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.606148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.606264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.606295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.606396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.606426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.606705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.606736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.606857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.606887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 
00:28:31.632 [2024-12-10 12:36:53.607004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.607035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.607294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.607325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.607628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.607658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.632 qpair failed and we were unable to recover it. 00:28:31.632 [2024-12-10 12:36:53.607824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.632 [2024-12-10 12:36:53.607855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.607966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.607996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 
00:28:31.633 [2024-12-10 12:36:53.608120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.608151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.608333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.608363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.608465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.608496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.608607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.608637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.608831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.608862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 
00:28:31.633 [2024-12-10 12:36:53.608994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.609025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.609127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.609169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.609301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.609331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.609521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.609553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.609723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.609753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 
00:28:31.633 [2024-12-10 12:36:53.609918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.609949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.610204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.610236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.610409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.610439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.610544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.610576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.610759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.610790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 
00:28:31.633 [2024-12-10 12:36:53.610982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.611013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.611199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.611231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.611396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.611428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.611545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.611575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.611744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.611780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 
00:28:31.633 [2024-12-10 12:36:53.611885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.611917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.612067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.612098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.612322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.612354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.612585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.612615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.612731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.612761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 
00:28:31.633 [2024-12-10 12:36:53.612865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.612895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.613009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.613040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.613280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.613312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.613442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.613472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.613584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.613615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 
00:28:31.633 [2024-12-10 12:36:53.613720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.613751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.613957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.613988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.633 [2024-12-10 12:36:53.614200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.633 [2024-12-10 12:36:53.614233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.633 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.614433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.614464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.614581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.614612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 
00:28:31.634 [2024-12-10 12:36:53.614839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.614869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.614997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.615028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.615130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.615169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.615372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.615402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.615625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.615655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 
00:28:31.634 [2024-12-10 12:36:53.615760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.615791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.615959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.615990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.616094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.616124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.616304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.616336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.616503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.616532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 
00:28:31.634 [2024-12-10 12:36:53.616784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.616814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.616942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.616974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.617176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.617208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.617379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.617410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.617537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.617567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 
00:28:31.634 [2024-12-10 12:36:53.617741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.617772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.617872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.617902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.618104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.618135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.618318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.618350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.618463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.618494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 
00:28:31.634 [2024-12-10 12:36:53.618663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.618693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.618900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.618930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.619176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.619208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.619324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.619355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.619459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.619495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 
00:28:31.634 [2024-12-10 12:36:53.619705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.619736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.619857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.619887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.620053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.620084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.620196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.620228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.620430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.620460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 
00:28:31.634 [2024-12-10 12:36:53.620569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.620600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.620766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.620796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.621055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.621084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.621274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.634 [2024-12-10 12:36:53.621306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.634 qpair failed and we were unable to recover it. 00:28:31.634 [2024-12-10 12:36:53.621472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.621503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 
00:28:31.635 [2024-12-10 12:36:53.621676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.621706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.621899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.621928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.622100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.622131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.622281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.622313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.622481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.622512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 
00:28:31.635 [2024-12-10 12:36:53.622630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.622660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.622910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.622941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.623106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.623136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.623403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.623435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.623603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.623633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 
00:28:31.635 [2024-12-10 12:36:53.623745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.623776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.623891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.623920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.624130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.624170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.624273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.624304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.624436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.624467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 
00:28:31.635 [2024-12-10 12:36:53.624632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.624662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.624791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.624822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.625002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.625031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.625144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.625187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.625291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.625321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 
00:28:31.635 [2024-12-10 12:36:53.625570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.625601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.625863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.625893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.626061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.626091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.626265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.626296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.626410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.626440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 
00:28:31.635 [2024-12-10 12:36:53.626540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.626571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.626706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.626737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.626853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.626883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.626982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.627014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.627236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.627274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 
00:28:31.635 [2024-12-10 12:36:53.627444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.627474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.627666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.627698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.627909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.627939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.628051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.628081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.628247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.628279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 
00:28:31.635 [2024-12-10 12:36:53.628451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.628482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.628586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.628616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.628746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.635 [2024-12-10 12:36:53.628777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.635 qpair failed and we were unable to recover it. 00:28:31.635 [2024-12-10 12:36:53.628896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.628927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-10 12:36:53.629114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.629145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 
00:28:31.636 [2024-12-10 12:36:53.629320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.629350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-10 12:36:53.629451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.629482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-10 12:36:53.629610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.629640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-10 12:36:53.629778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.629809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-10 12:36:53.629911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.629941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 
00:28:31.636 [2024-12-10 12:36:53.630107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.630138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-10 12:36:53.630329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.630360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-10 12:36:53.630474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.630504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-10 12:36:53.630670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.630700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-10 12:36:53.630890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.630921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 
00:28:31.636 [2024-12-10 12:36:53.631035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.631065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-10 12:36:53.631282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.631314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-10 12:36:53.631478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.631512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-10 12:36:53.631709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.631740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-10 12:36:53.631911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.631941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 
00:28:31.636 [2024-12-10 12:36:53.632116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.632148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-10 12:36:53.632289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.632321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-10 12:36:53.632429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.632460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-10 12:36:53.632647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.632678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-10 12:36:53.632791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.632821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 
00:28:31.636 [2024-12-10 12:36:53.632929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.632959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-10 12:36:53.633074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.633106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-10 12:36:53.633283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.633313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-10 12:36:53.633509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.633540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 00:28:31.636 [2024-12-10 12:36:53.633725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.636 [2024-12-10 12:36:53.633756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.636 qpair failed and we were unable to recover it. 
00:28:31.636 [2024-12-10 12:36:53.633936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.636 [2024-12-10 12:36:53.633967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.636 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats continuously from 12:36:53.634143 through 12:36:53.638130 for tqpair=0x7f88cc000b90, addr=10.0.0.2, port=4420 ...]
00:28:31.637 [2024-12-10 12:36:53.638369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.637 [2024-12-10 12:36:53.638439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.637 qpair failed and we were unable to recover it.
00:28:31.637 [2024-12-10 12:36:53.638605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.637 [2024-12-10 12:36:53.638675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.637 qpair failed and we were unable to recover it.
[... identical failure triplets continue from 12:36:53.638874 through 12:36:53.654895, alternating between tqpair=0x7f88d8000b90 and tqpair=0x1574be0, all against addr=10.0.0.2, port=4420 ...]
00:28:31.639 [2024-12-10 12:36:53.655029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.639 [2024-12-10 12:36:53.655059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.639 qpair failed and we were unable to recover it. 00:28:31.639 [2024-12-10 12:36:53.655178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.639 [2024-12-10 12:36:53.655213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.639 qpair failed and we were unable to recover it. 00:28:31.639 [2024-12-10 12:36:53.655339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.639 [2024-12-10 12:36:53.655370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.639 qpair failed and we were unable to recover it. 00:28:31.639 [2024-12-10 12:36:53.655476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.639 [2024-12-10 12:36:53.655507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.639 qpair failed and we were unable to recover it. 00:28:31.639 [2024-12-10 12:36:53.655643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.639 [2024-12-10 12:36:53.655672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.639 qpair failed and we were unable to recover it. 
00:28:31.639 [2024-12-10 12:36:53.655848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.639 [2024-12-10 12:36:53.655879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.639 qpair failed and we were unable to recover it. 00:28:31.639 [2024-12-10 12:36:53.655992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.639 [2024-12-10 12:36:53.656023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.639 qpair failed and we were unable to recover it. 00:28:31.639 [2024-12-10 12:36:53.656201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.639 [2024-12-10 12:36:53.656232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.639 qpair failed and we were unable to recover it. 00:28:31.639 [2024-12-10 12:36:53.656400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.639 [2024-12-10 12:36:53.656432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.639 qpair failed and we were unable to recover it. 00:28:31.639 [2024-12-10 12:36:53.656601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.639 [2024-12-10 12:36:53.656633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.639 qpair failed and we were unable to recover it. 
00:28:31.640 [2024-12-10 12:36:53.656823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.656854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.657048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.657079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.657202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.657234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.657432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.657462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.657639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.657670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 
00:28:31.640 [2024-12-10 12:36:53.657780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.657810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.657928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.657959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.658069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.658100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.658284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.658317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.658441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.658471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 
00:28:31.640 [2024-12-10 12:36:53.658585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.658616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.658785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.658815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.659026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.659057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.659171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.659209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.659376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.659409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 
00:28:31.640 [2024-12-10 12:36:53.659677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.659708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.659989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.660022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.660192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.660225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.660403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.660433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.660629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.660660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 
00:28:31.640 [2024-12-10 12:36:53.660774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.660805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.660906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.660937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.661052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.661083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.661191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.661224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.661330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.661362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 
00:28:31.640 [2024-12-10 12:36:53.661548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.661578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.661771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.661803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.661921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.661952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.662138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.662179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.662306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.662335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 
00:28:31.640 [2024-12-10 12:36:53.662462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.662493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.662662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.662692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.662864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.662895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.663013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.663043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.663209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.663241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 
00:28:31.640 [2024-12-10 12:36:53.663353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.663383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.663558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.663588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.663804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.663836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.663956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.640 [2024-12-10 12:36:53.663985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.640 qpair failed and we were unable to recover it. 00:28:31.640 [2024-12-10 12:36:53.664091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.664124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 
00:28:31.641 [2024-12-10 12:36:53.664278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.664311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.664414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.664444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.664562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.664593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.664757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.664788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.664884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.664915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 
00:28:31.641 [2024-12-10 12:36:53.665013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.665043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.665227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.665260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.665394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.665426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.665602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.665633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.665805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.665836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 
00:28:31.641 [2024-12-10 12:36:53.666039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.666068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.666183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.666215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.666384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.666415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.666583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.666619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.666788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.666820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 
00:28:31.641 [2024-12-10 12:36:53.666922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.666953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.667069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.667100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.667228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.667260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.667372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.667403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.667507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.667538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 
00:28:31.641 [2024-12-10 12:36:53.667638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.667669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.667876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.667906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.668005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.668037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.668138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.668176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.668291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.668323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 
00:28:31.641 [2024-12-10 12:36:53.668441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.668471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.668633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.668663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.668758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.668789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.668950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.668980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 00:28:31.641 [2024-12-10 12:36:53.669110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.641 [2024-12-10 12:36:53.669141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.641 qpair failed and we were unable to recover it. 
[the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 → nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it) repeats continuously from 12:36:53.669270 through 12:36:53.687578]
00:28:31.644 [2024-12-10 12:36:53.687687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.644 [2024-12-10 12:36:53.687714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.644 qpair failed and we were unable to recover it. 00:28:31.644 [2024-12-10 12:36:53.687874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.644 [2024-12-10 12:36:53.687902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.644 qpair failed and we were unable to recover it. 00:28:31.644 [2024-12-10 12:36:53.688087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.644 [2024-12-10 12:36:53.688118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.644 qpair failed and we were unable to recover it. 00:28:31.644 [2024-12-10 12:36:53.688242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.644 [2024-12-10 12:36:53.688290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.644 qpair failed and we were unable to recover it. 00:28:31.644 [2024-12-10 12:36:53.688505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.644 [2024-12-10 12:36:53.688534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.644 qpair failed and we were unable to recover it. 
00:28:31.644 [2024-12-10 12:36:53.688722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.644 [2024-12-10 12:36:53.688751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.644 qpair failed and we were unable to recover it. 00:28:31.644 [2024-12-10 12:36:53.688867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.644 [2024-12-10 12:36:53.688895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.644 qpair failed and we were unable to recover it. 00:28:31.644 [2024-12-10 12:36:53.689024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.644 [2024-12-10 12:36:53.689053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.644 qpair failed and we were unable to recover it. 00:28:31.644 [2024-12-10 12:36:53.689150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.689201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.689297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.689325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 
00:28:31.645 [2024-12-10 12:36:53.689497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.689525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.689622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.689651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.689743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.689771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.689933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.689962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.690127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.690152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 
00:28:31.645 [2024-12-10 12:36:53.690278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.690304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.690389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.690415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.690690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.690759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.690930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.691000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.691123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.691150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 
00:28:31.645 [2024-12-10 12:36:53.691360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.691386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.691474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.691499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.691684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.691711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.691881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.691906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.692007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.692034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 
00:28:31.645 [2024-12-10 12:36:53.692134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.692176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.692273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.692300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.692456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.692482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.692634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.692660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.692823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.692848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 
00:28:31.645 [2024-12-10 12:36:53.692941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.692967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.693132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.693168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.693328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.693355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.693514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.693545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.693804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.693831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 
00:28:31.645 [2024-12-10 12:36:53.693920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.693945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.694134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.694168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.694268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.694294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.694491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.694516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.694636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.694666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 
00:28:31.645 [2024-12-10 12:36:53.694757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.694783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.694891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.694916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.695009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.695035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.695288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.695318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.695506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.695532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 
00:28:31.645 [2024-12-10 12:36:53.695662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.695688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.695874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.645 [2024-12-10 12:36:53.695900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.645 qpair failed and we were unable to recover it. 00:28:31.645 [2024-12-10 12:36:53.695994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.696021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.696200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.696226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.696389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.696415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 
00:28:31.646 [2024-12-10 12:36:53.696570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.696597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.696756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.696782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.696876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.696901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.697084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.697110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.697226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.697253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 
00:28:31.646 [2024-12-10 12:36:53.697411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.697438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.697609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.697636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.697809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.697837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.697925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.697951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.698203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.698230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 
00:28:31.646 [2024-12-10 12:36:53.698428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.698469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.698658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.698688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.698789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.698820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.698921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.698953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.699061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.699091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 
00:28:31.646 [2024-12-10 12:36:53.699204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.699237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.699349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.699380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.699503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.699533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.699639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.699669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.699886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.699916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 
00:28:31.646 [2024-12-10 12:36:53.700175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.700208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.700341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.700372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.700470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.700501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.700630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.700670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.700774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.700805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 
00:28:31.646 [2024-12-10 12:36:53.700905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.700935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.701148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.701192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.701296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.701326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.701440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.701470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.701643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.701673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 
00:28:31.646 [2024-12-10 12:36:53.701874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.701905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.702024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.702055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.702227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.702258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.702365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.702395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 00:28:31.646 [2024-12-10 12:36:53.702579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.646 [2024-12-10 12:36:53.702610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.646 qpair failed and we were unable to recover it. 
00:28:31.646 [2024-12-10 12:36:53.702713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.646 [2024-12-10 12:36:53.702743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.646 qpair failed and we were unable to recover it.
00:28:31.646 [2024-12-10 12:36:53.702853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.702883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.703019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.703050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.703267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.703299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.703409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.703439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.703555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.703586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.703708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.703737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.703929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.703959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.704070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.704100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.704226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.704257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.704459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.704490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.704595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.704626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.704743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.704774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.704903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.704933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.705031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.705061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.705307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.705378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.705591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.705627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.705733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.705765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.705949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.705981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.706175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.706208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.706403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.706435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.706675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.706705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.706881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.706913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.707090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.707121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.707246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.707279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.707405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.707435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.707537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.707569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.707679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.707710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.707879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.707920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.708048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.708079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.708202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.708236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.708340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.708373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.708612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.708644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.708760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.708791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.708899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.708929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.647 [2024-12-10 12:36:53.709109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.647 [2024-12-10 12:36:53.709141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.647 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.709264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.709296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.709423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.709455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.709589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.709621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.709802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.709833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.709947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.709979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.710083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.710115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.710309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.710342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.710463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.710494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.710674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.710707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.710812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.710843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.711013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.711055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.711228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.711260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.711377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.711407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.711517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.711548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.711652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.711682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.711789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.711819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.711937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.711968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.712136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.712182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.712299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.712330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.712572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.712641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.712842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.712877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.713051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.713084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.713213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.713246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.713509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.713539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.713642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.713672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.713779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.713809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.713982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.714012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.714114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.714145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.714278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.714309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.714413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.714444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.714555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.714586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.714685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.714715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.714885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.714915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.715102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.715133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.715247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.715278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.715390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.715419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.715529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.715560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.715744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.715774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.648 qpair failed and we were unable to recover it.
00:28:31.648 [2024-12-10 12:36:53.715883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.648 [2024-12-10 12:36:53.715913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.716087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.716118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.716232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.716264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.716426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.716456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.716563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.716593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.716693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.716724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.716825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.716855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.716967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.716997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.717179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.717217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.717395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.717426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.717528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.717559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.717746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.717776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.717882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.717912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.718114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.718144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.718261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.718292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.718467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.718498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.718668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.718699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.718870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.718901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.719073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.719105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.719245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.719280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.719384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.719415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.719528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.719558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.719682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.719714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.719881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.719924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.720111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.720145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.720287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.720318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.720462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.720493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.720605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.720636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.720743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.720775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.720989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.721021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.721143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.721188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.721303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.721334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.721442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.721472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.721591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.721621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.721725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.721762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.721932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.721970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.722084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.722114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.722307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.722340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.722512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.722544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.722669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.722700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.722819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.722849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.722953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.722984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.723090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.649 [2024-12-10 12:36:53.723119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.649 qpair failed and we were unable to recover it.
00:28:31.649 [2024-12-10 12:36:53.723301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.723333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.723462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.723502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.723687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.723719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.723831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.723863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.723970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.724002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.724113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.724145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.724273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.724305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.724530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.724562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.724681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.724713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.724888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.724920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.725096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.725128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.725353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.725421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.725607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.725676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.725813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.725849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.725960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.725994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.726111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.726142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.726333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.726365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.726491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.726522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.726689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.726719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.726900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.726937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.727053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.727086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.727260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.727293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.727405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.727436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.727570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.727603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.727735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.727766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.727868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.727899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.728002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.728033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.728172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.728209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.728316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.728347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.728455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.728485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.728589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.728619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.728811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.728841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.728962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.728993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.729106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.729136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.729267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.729299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.729465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.729496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.729617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.729647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.729779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.729815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.729930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.729960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.730071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.730102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.730230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.730262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.730389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.730423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.650 [2024-12-10 12:36:53.730528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.650 [2024-12-10 12:36:53.730559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.650 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.730665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.730697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.730862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.730893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.730996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.731027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.731143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.731185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.731304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.731335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.731456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.731488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.731592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.731623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.731727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.731758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.731947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.731982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.732085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.732115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.732249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.732282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.732399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.732429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.732602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.732633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.732738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.732769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.732889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.732921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.733022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.733053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.733174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.733212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.733385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.733416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.733539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.733578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.733772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.733803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.733904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.733935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.734107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.734138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.734268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.734299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.734410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.734441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.734555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.734585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.734709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.734740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.734841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.734872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.734978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.735009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.735123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.735187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.735375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.651 [2024-12-10 12:36:53.735408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.651 qpair failed and we were unable to recover it.
00:28:31.651 [2024-12-10 12:36:53.735588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-10 12:36:53.735620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-10 12:36:53.735726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-10 12:36:53.735756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-10 12:36:53.735869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-10 12:36:53.735905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-10 12:36:53.736089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-10 12:36:53.736119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-10 12:36:53.736306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-10 12:36:53.736337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 
00:28:31.651 [2024-12-10 12:36:53.736451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.651 [2024-12-10 12:36:53.736482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.651 qpair failed and we were unable to recover it. 00:28:31.651 [2024-12-10 12:36:53.736600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.736630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-10 12:36:53.736739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.736769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-10 12:36:53.736893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.736927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-10 12:36:53.737051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.737081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 
00:28:31.652 [2024-12-10 12:36:53.737193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.737226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-10 12:36:53.737338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.737366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-10 12:36:53.737616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.737647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-10 12:36:53.737831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.737868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-10 12:36:53.737974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.738005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 
00:28:31.652 [2024-12-10 12:36:53.738195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.738229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-10 12:36:53.738411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.738442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-10 12:36:53.738548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.738579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-10 12:36:53.738682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.738713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-10 12:36:53.738909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.738940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 
00:28:31.652 [2024-12-10 12:36:53.739052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.739083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-10 12:36:53.739210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.739243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-10 12:36:53.739359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.739391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-10 12:36:53.739594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.739624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-10 12:36:53.739726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.739756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 
00:28:31.652 [2024-12-10 12:36:53.739930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.739964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-10 12:36:53.740139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.740187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-10 12:36:53.740295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.740326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-10 12:36:53.740443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.740474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-10 12:36:53.740645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.740676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 
00:28:31.652 [2024-12-10 12:36:53.740940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.740970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-10 12:36:53.741197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.741230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-10 12:36:53.741332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.652 [2024-12-10 12:36:53.741363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.652 qpair failed and we were unable to recover it. 00:28:31.652 [2024-12-10 12:36:53.741469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.741500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 00:28:31.653 [2024-12-10 12:36:53.741788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.741819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 
00:28:31.653 [2024-12-10 12:36:53.742035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.742066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 00:28:31.653 [2024-12-10 12:36:53.742183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.742214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 00:28:31.653 [2024-12-10 12:36:53.742345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.742376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 00:28:31.653 [2024-12-10 12:36:53.742490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.742521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 00:28:31.653 [2024-12-10 12:36:53.742723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.742754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 
00:28:31.653 [2024-12-10 12:36:53.742864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.742895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 00:28:31.653 [2024-12-10 12:36:53.743063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.743094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 00:28:31.653 [2024-12-10 12:36:53.743219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.743251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 00:28:31.653 [2024-12-10 12:36:53.743415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.743447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 00:28:31.653 [2024-12-10 12:36:53.743663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.743693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 
00:28:31.653 [2024-12-10 12:36:53.743861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.743892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 00:28:31.653 [2024-12-10 12:36:53.744092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.744123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 00:28:31.653 [2024-12-10 12:36:53.744252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.744285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 00:28:31.653 [2024-12-10 12:36:53.744398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.744429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 00:28:31.653 [2024-12-10 12:36:53.744551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.744582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 
00:28:31.653 [2024-12-10 12:36:53.744756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.744787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 00:28:31.653 [2024-12-10 12:36:53.744898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.744929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 00:28:31.653 [2024-12-10 12:36:53.745028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.745059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 00:28:31.653 [2024-12-10 12:36:53.745285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.745354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 00:28:31.653 [2024-12-10 12:36:53.745539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.745608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 
00:28:31.653 [2024-12-10 12:36:53.745747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.745793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 00:28:31.653 [2024-12-10 12:36:53.745930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.745962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 00:28:31.653 [2024-12-10 12:36:53.746084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.746114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 00:28:31.653 [2024-12-10 12:36:53.746226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.653 [2024-12-10 12:36:53.746259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.653 qpair failed and we were unable to recover it. 00:28:31.939 [2024-12-10 12:36:53.746478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.939 [2024-12-10 12:36:53.746511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.939 qpair failed and we were unable to recover it. 
00:28:31.939 [2024-12-10 12:36:53.746705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.939 [2024-12-10 12:36:53.746735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.939 qpair failed and we were unable to recover it. 00:28:31.939 [2024-12-10 12:36:53.746857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.939 [2024-12-10 12:36:53.746887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.939 qpair failed and we were unable to recover it. 00:28:31.939 [2024-12-10 12:36:53.747007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.939 [2024-12-10 12:36:53.747039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.939 qpair failed and we were unable to recover it. 00:28:31.939 [2024-12-10 12:36:53.747151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.939 [2024-12-10 12:36:53.747191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.939 qpair failed and we were unable to recover it. 00:28:31.939 [2024-12-10 12:36:53.747360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.939 [2024-12-10 12:36:53.747391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.939 qpair failed and we were unable to recover it. 
00:28:31.939 [2024-12-10 12:36:53.747507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.939 [2024-12-10 12:36:53.747538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.939 qpair failed and we were unable to recover it. 00:28:31.939 [2024-12-10 12:36:53.747659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.939 [2024-12-10 12:36:53.747689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.939 qpair failed and we were unable to recover it. 00:28:31.939 [2024-12-10 12:36:53.747807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.939 [2024-12-10 12:36:53.747838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.939 qpair failed and we were unable to recover it. 00:28:31.939 [2024-12-10 12:36:53.747939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.939 [2024-12-10 12:36:53.747972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.939 qpair failed and we were unable to recover it. 00:28:31.939 [2024-12-10 12:36:53.748100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.939 [2024-12-10 12:36:53.748130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.939 qpair failed and we were unable to recover it. 
00:28:31.939 [2024-12-10 12:36:53.748347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.939 [2024-12-10 12:36:53.748417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.939 qpair failed and we were unable to recover it. 00:28:31.939 [2024-12-10 12:36:53.748608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.939 [2024-12-10 12:36:53.748643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.939 qpair failed and we were unable to recover it. 00:28:31.939 [2024-12-10 12:36:53.748757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.939 [2024-12-10 12:36:53.748788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.939 qpair failed and we were unable to recover it. 00:28:31.939 [2024-12-10 12:36:53.748916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.939 [2024-12-10 12:36:53.748949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.939 qpair failed and we were unable to recover it. 00:28:31.939 [2024-12-10 12:36:53.749192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.939 [2024-12-10 12:36:53.749224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.939 qpair failed and we were unable to recover it. 
00:28:31.939 [2024-12-10 12:36:53.749347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.749377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.749542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.749573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.749682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.749711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.749837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.749867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.749979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.750009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 
00:28:31.940 [2024-12-10 12:36:53.750118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.750168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.750286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.750317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.750510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.750541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.750714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.750745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.750916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.750947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 
00:28:31.940 [2024-12-10 12:36:53.751115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.751147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.751287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.751319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.751439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.751470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.751703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.751734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.751913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.751944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 
00:28:31.940 [2024-12-10 12:36:53.752110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.752141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.752269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.752300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.752421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.752452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.752555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.752594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.752727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.752758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 
00:28:31.940 [2024-12-10 12:36:53.752882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.752914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.753087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.753120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.753340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.753373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.753546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.753578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.753774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.753804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 
00:28:31.940 [2024-12-10 12:36:53.753982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.754012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.754190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.754223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.754414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.754446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.754553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.754590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.754763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.754793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 
00:28:31.940 [2024-12-10 12:36:53.754904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.754935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.755112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.755145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.755352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.755391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.755504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.755534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.755661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.755693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 
00:28:31.940 [2024-12-10 12:36:53.755811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.755842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.756030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.756062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.756230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.756264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.940 [2024-12-10 12:36:53.756445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.940 [2024-12-10 12:36:53.756478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.940 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.756610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.756640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 
00:28:31.941 [2024-12-10 12:36:53.756830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.756862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.756971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.757004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.757107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.757137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.757252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.757284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.757482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.757518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 
00:28:31.941 [2024-12-10 12:36:53.757659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.757718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.757934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.757973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.758145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.758189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.758291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.758323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.758438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.758468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 
00:28:31.941 [2024-12-10 12:36:53.758569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.758601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.758703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.758733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.758844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.758876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.758977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.759008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.759114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.759145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 
00:28:31.941 [2024-12-10 12:36:53.759276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.759307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.759418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.759448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.759554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.759584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.759701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.759731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.759855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.759887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 
00:28:31.941 [2024-12-10 12:36:53.760055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.760085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.760290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.760321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.760440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.760471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.760678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.760709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.760827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.760857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 
00:28:31.941 [2024-12-10 12:36:53.761056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.761086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.761187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.761220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.761392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.761421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.761538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.761568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.761692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.761723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 
00:28:31.941 [2024-12-10 12:36:53.761914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.761946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.762125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.762155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.762334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.762371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.762541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.762573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.762703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.762734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 
00:28:31.941 [2024-12-10 12:36:53.762898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.762928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.763041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.763071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.763171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.941 [2024-12-10 12:36:53.763202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.941 qpair failed and we were unable to recover it. 00:28:31.941 [2024-12-10 12:36:53.763311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.763341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.763446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.763475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 
00:28:31.942 [2024-12-10 12:36:53.763637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.763667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.763777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.763808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.763978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.764010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.764176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.764206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.764388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.764418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 
00:28:31.942 [2024-12-10 12:36:53.764599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.764631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.764906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.764937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.765052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.765083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.765195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.765228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.765345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.765377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 
00:28:31.942 [2024-12-10 12:36:53.765487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.765517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.765626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.765657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.765822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.765854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.765969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.766001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.766113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.766144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 
00:28:31.942 [2024-12-10 12:36:53.766270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.766301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.766406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.766436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.766538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.766568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.766759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.766789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.767032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.767082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 
00:28:31.942 [2024-12-10 12:36:53.767250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.767283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.767386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.767418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.767664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.767695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.767797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.767827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.767938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.767968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 
00:28:31.942 [2024-12-10 12:36:53.768072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.768102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.768348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.768379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.768483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.768513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.768631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.768662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 00:28:31.942 [2024-12-10 12:36:53.768761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.942 [2024-12-10 12:36:53.768791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.942 qpair failed and we were unable to recover it. 
00:28:31.943 [2024-12-10 12:36:53.773785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.943 [2024-12-10 12:36:53.773827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.943 qpair failed and we were unable to recover it.
00:28:31.944 [2024-12-10 12:36:53.780656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.944 [2024-12-10 12:36:53.780692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.944 qpair failed and we were unable to recover it.
00:28:31.945 [2024-12-10 12:36:53.787780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.945 [2024-12-10 12:36:53.787826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.945 qpair failed and we were unable to recover it. 00:28:31.945 [2024-12-10 12:36:53.787956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.945 [2024-12-10 12:36:53.787989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.945 qpair failed and we were unable to recover it. 00:28:31.945 [2024-12-10 12:36:53.788188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.945 [2024-12-10 12:36:53.788229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.945 qpair failed and we were unable to recover it. 00:28:31.945 [2024-12-10 12:36:53.788366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.788399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.788514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.788546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 
00:28:31.946 [2024-12-10 12:36:53.788667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.788704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.788912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.788950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.789065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.789098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.789240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.789273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.789449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.789481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 
00:28:31.946 [2024-12-10 12:36:53.789587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.789623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.789742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.789774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.789875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.789905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.790020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.790057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.790156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.790198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 
00:28:31.946 [2024-12-10 12:36:53.790400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.790431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.790547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.790578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.790745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.790776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.790985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.791016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.791206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.791238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 
00:28:31.946 [2024-12-10 12:36:53.791355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.791385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.791512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.791543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.791656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.791687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.791796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.791826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.791926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.791956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 
00:28:31.946 [2024-12-10 12:36:53.792071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.792102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.792302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.792355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.792536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.792568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.792676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.792707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.792896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.792926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 
00:28:31.946 [2024-12-10 12:36:53.793044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.793075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.793198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.793230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.793399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.793431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.793549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.793579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.793693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.793727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 
00:28:31.946 [2024-12-10 12:36:53.793832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.793863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.794033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.794064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.794177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.794209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.794380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.794411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.794519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.794550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 
00:28:31.946 [2024-12-10 12:36:53.794749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.794792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.946 qpair failed and we were unable to recover it. 00:28:31.946 [2024-12-10 12:36:53.794907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.946 [2024-12-10 12:36:53.794942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.795061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.795096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.795211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.795242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.795432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.795464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 
00:28:31.947 [2024-12-10 12:36:53.795585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.795614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.795798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.795828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.795963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.795992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.796106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.796137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.796249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.796279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 
00:28:31.947 [2024-12-10 12:36:53.796387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.796418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.796525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.796555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.796722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.796752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.796936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.796972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.797218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.797251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 
00:28:31.947 [2024-12-10 12:36:53.797358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.797388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.797490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.797521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.797623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.797654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.797822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.797851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.798021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.798052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 
00:28:31.947 [2024-12-10 12:36:53.798181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.798213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.798335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.798365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.798483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.798514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.798630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.798660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.798762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.798793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 
00:28:31.947 [2024-12-10 12:36:53.798970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.799000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.799175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.799208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.799334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.799365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.799468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.799498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.799596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.799626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 
00:28:31.947 [2024-12-10 12:36:53.799736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.799767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.799932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.799963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.800071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.800102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.800222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.800253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 00:28:31.947 [2024-12-10 12:36:53.800366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.947 [2024-12-10 12:36:53.800396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.947 qpair failed and we were unable to recover it. 
00:28:31.947 [2024-12-10 12:36:53.800563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.947 [2024-12-10 12:36:53.800592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.947 qpair failed and we were unable to recover it.
00:28:31.947 [2024-12-10 12:36:53.800712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.947 [2024-12-10 12:36:53.800743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.947 qpair failed and we were unable to recover it.
00:28:31.947 [2024-12-10 12:36:53.800912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.800942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.801057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.801087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.801256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.801288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.801463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.801500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.801618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.801650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.801827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.801858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.801967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.801997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.802185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.802217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.802407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.802438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.802647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.802677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.802789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.802819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.802940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.802971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.803145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.803188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.803308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.803339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.803525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.803556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.803664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.803695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.803806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.803836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.803958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.803995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.804094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.804124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.804245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.804276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.804452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.804483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.804607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.804637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.804764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.804794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.804893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.804922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.805036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.805066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.805237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.805270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.805391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.805422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.805588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.805618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.805743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.805774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.805947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.805978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.806092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.806128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.806291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.806360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.806483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.806520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.806631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.948 [2024-12-10 12:36:53.806664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.948 qpair failed and we were unable to recover it.
00:28:31.948 [2024-12-10 12:36:53.806788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.806819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.806949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.806981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.807101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.807131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.807255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.807287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.807394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.807426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.807533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.807564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.807665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.807696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.807866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.807897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.808171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.808203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.808316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.808347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.808457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.808489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.808613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.808645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.808759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.808791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.808965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.808997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.809104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.809135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.809270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.809302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.809407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.809438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.809553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.809584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.809694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.809724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.809827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.809858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.809958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.809990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.810198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.810232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.810347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.810380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.810566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.810599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.810699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.810731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.810909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.810941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.811045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.811078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.811266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.811300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.811429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.811459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.811625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.811656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.811771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.811802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.811902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.811933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.812034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.812065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.812181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.812214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.812451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.812482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.812668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.812699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.812923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.812961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.813083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.813113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.813307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.813339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.949 qpair failed and we were unable to recover it.
00:28:31.949 [2024-12-10 12:36:53.813463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.949 [2024-12-10 12:36:53.813494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.813610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.813641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.813817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.813848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.814032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.814064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.814240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.814272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.814410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.814441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.814614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.814646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.814767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.814798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.814905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.814935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.815102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.815133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.815310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.815342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.815475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.815506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.815633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.815664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.815843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.815874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.815982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.816013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.816116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.816147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.816257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.816287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.816391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.816422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.816595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.816625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.816749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.816779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.816958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.816989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.817098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.817130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.817260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.817301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.817421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.817454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.817569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.817602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.817713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.817744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.817856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.817886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.818057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.818087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.818211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.818243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.818427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.818467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.818684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.818719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.818884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.818914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.819011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.819041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.819215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.819247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.819468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.819497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.819694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.819724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.819822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.819852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.819970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.820008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.820114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.950 [2024-12-10 12:36:53.820144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.950 qpair failed and we were unable to recover it.
00:28:31.950 [2024-12-10 12:36:53.820262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.951 [2024-12-10 12:36:53.820309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.951 qpair failed and we were unable to recover it.
00:28:31.951 [2024-12-10 12:36:53.820487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.820517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.820634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.820665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.820774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.820804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.820918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.820963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.821097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.821132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 
00:28:31.951 [2024-12-10 12:36:53.821346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.821381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.821495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.821526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.821710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.821740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.821915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.821960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.822185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.822220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 
00:28:31.951 [2024-12-10 12:36:53.822332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.822362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.822542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.822573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.822755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.822786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.822956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.822987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.823089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.823119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 
00:28:31.951 [2024-12-10 12:36:53.823245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.823277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.823452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.823483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.823583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.823613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.823717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.823747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.823926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.823956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 
00:28:31.951 [2024-12-10 12:36:53.824078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.824107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.824237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.824268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.824381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.824412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.824576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.824606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.824715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.824750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 
00:28:31.951 [2024-12-10 12:36:53.824951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.824981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.825107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.825138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.825256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.825287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.825392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.825424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.825615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.825645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 
00:28:31.951 [2024-12-10 12:36:53.825813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.825845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.825946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.825976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.826091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.826121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.826265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.826297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.826414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.826445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 
00:28:31.951 [2024-12-10 12:36:53.826556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.826587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.826696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.826727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.826831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.951 [2024-12-10 12:36:53.826867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.951 qpair failed and we were unable to recover it. 00:28:31.951 [2024-12-10 12:36:53.826967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.826998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.827186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.827219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 
00:28:31.952 [2024-12-10 12:36:53.827334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.827365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.827476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.827506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.827616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.827647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.827815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.827845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.827961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.827993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 
00:28:31.952 [2024-12-10 12:36:53.828118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.828148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.828266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.828297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.828421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.828452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.828644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.828675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.828790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.828821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 
00:28:31.952 [2024-12-10 12:36:53.829004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.829034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.829147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.829188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.829292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.829323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.829432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.829463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.829564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.829595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 
00:28:31.952 [2024-12-10 12:36:53.829762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.829794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.829961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.829992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.830102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.830133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.830281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.830313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.830450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.830481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 
00:28:31.952 [2024-12-10 12:36:53.830653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.830684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.830787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.830819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.831011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.831042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.831173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.831206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.831484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.831553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 
00:28:31.952 [2024-12-10 12:36:53.831800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.831835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.832030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.832062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.832270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.832303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.832420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.832451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.832570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.832601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 
00:28:31.952 [2024-12-10 12:36:53.832721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.832752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.832854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.832884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.833122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.833153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.833330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.833361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.833531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.833562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 
00:28:31.952 [2024-12-10 12:36:53.833729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.833760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.833951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.952 [2024-12-10 12:36:53.833982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.952 qpair failed and we were unable to recover it. 00:28:31.952 [2024-12-10 12:36:53.834224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.834265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.834460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.834492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.834685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.834715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 
00:28:31.953 [2024-12-10 12:36:53.834880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.834911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.835024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.835055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.835174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.835206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.835333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.835363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.835552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.835583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 
00:28:31.953 [2024-12-10 12:36:53.835749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.835779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.835947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.835978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.836087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.836118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.836300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.836332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.836544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.836575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 
00:28:31.953 [2024-12-10 12:36:53.836695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.836725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.836912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.836943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.837062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.837093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.837198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.837230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.837398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.837428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 
00:28:31.953 [2024-12-10 12:36:53.837619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.837651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.837820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.837850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.838034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.838065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.838194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.838225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.838431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.838461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 
00:28:31.953 [2024-12-10 12:36:53.838571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.838601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.838700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.838730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.838834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.838865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.838971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.839002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.839261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.839331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 
00:28:31.953 [2024-12-10 12:36:53.839472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.839507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.839632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.839665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.839780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.839812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.839981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.840013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.840122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.840153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 
00:28:31.953 [2024-12-10 12:36:53.840286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.953 [2024-12-10 12:36:53.840316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.953 qpair failed and we were unable to recover it. 00:28:31.953 [2024-12-10 12:36:53.840432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.840464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.840576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.840607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.840730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.840761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.840927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.840958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 
00:28:31.954 [2024-12-10 12:36:53.841067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.841098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.841322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.841354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.841464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.841496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.841624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.841655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.841776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.841808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 
00:28:31.954 [2024-12-10 12:36:53.841909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.841940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.842103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.842134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.842255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.842286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.842408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.842439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.842628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.842662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 
00:28:31.954 [2024-12-10 12:36:53.842836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.842867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.842975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.843004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.843109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.843138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.843333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.843364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.843473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.843505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 
00:28:31.954 [2024-12-10 12:36:53.843675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.843705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.843813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.843850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.844036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.844067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.844188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.844220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.844326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.844357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 
00:28:31.954 [2024-12-10 12:36:53.844466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.844497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.844677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.844708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.844838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.844869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.845047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.845078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.845244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.845276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 
00:28:31.954 [2024-12-10 12:36:53.845445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.845476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.845583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.845614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.845796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.845828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.845928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.845958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.846079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.846111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 
00:28:31.954 [2024-12-10 12:36:53.846253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.846285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.846405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.954 [2024-12-10 12:36:53.846436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.954 qpair failed and we were unable to recover it. 00:28:31.954 [2024-12-10 12:36:53.846601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.846631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.846806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.846837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.846938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.846969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 
00:28:31.955 [2024-12-10 12:36:53.847070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.847100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.847213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.847245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.847348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.847379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.847478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.847508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.847689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.847720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 
00:28:31.955 [2024-12-10 12:36:53.847850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.847882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.847979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.848010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.848110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.848141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.848265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.848303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.848404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.848434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 
00:28:31.955 [2024-12-10 12:36:53.848554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.848585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.848789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.848820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.848937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.848968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.849084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.849114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.849228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.849260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 
00:28:31.955 [2024-12-10 12:36:53.849370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.849400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.849581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.849612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.849743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.849773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.850009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.850040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.850143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.850186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 
00:28:31.955 [2024-12-10 12:36:53.850302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.850333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.850499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.850529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.850651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.850683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.850793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.850823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.851085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.851117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 
00:28:31.955 [2024-12-10 12:36:53.851236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.851268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.851435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.851466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.851566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.851597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.851710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.851741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 00:28:31.955 [2024-12-10 12:36:53.851932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.955 [2024-12-10 12:36:53.851962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.955 qpair failed and we were unable to recover it. 
00:28:31.955 [2024-12-10 12:36:53.852062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.955 [2024-12-10 12:36:53.852094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.955 qpair failed and we were unable to recover it.
00:28:31.955 [2024-12-10 12:36:53.852194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.955 [2024-12-10 12:36:53.852226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.955 qpair failed and we were unable to recover it.
00:28:31.955 [2024-12-10 12:36:53.852355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.955 [2024-12-10 12:36:53.852386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.955 qpair failed and we were unable to recover it.
00:28:31.955 [2024-12-10 12:36:53.852487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.955 [2024-12-10 12:36:53.852517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.955 qpair failed and we were unable to recover it.
00:28:31.955 [2024-12-10 12:36:53.852679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.955 [2024-12-10 12:36:53.852711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.955 qpair failed and we were unable to recover it.
00:28:31.955 [2024-12-10 12:36:53.852823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.955 [2024-12-10 12:36:53.852859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.955 qpair failed and we were unable to recover it.
00:28:31.955 [2024-12-10 12:36:53.853063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.853095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.853197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.853230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.853402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.853433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.853562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.853592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.853796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.853828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.853993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.854023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.854193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.854224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.854349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.854380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.854478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.854509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.854627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.854658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.854830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.854861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.855038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.855070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.855194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.855225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.855382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.855453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.855650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.855684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.855805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.855837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.855945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.855977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.856080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.856111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.856232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.856266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.856455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.856486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.856748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.856781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.856966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.856997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.857181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.857214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.857387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.857418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.857580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.857610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.857778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.857810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.857910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.857951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.858126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.858171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.858290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.858321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.858487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.858520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.858634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.858665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.858773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.858805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.858908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.858939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.859050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.859082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.859209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.859241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.859349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.859380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.859486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.859517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.859696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.859727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.859828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.859859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.956 qpair failed and we were unable to recover it.
00:28:31.956 [2024-12-10 12:36:53.859975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.956 [2024-12-10 12:36:53.860006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.860183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.860216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.860341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.860372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.860499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.860530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.860642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.860674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.860777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.860808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.860908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.860939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.861052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.861084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.861210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.861242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.861413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.861445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.861567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.861598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.861723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.861754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.862014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.862046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.862243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.862276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.862466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.862497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.862614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.862645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.862851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.862883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.862996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.863026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.863137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.863186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.863300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.863330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.863441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.863472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.863572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.863603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.863739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.863770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.863881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.863911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.864018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.864050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.864171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.864204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.864304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.864334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.864452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.864488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.864658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.864689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.864800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.864830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.864936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.864967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.865140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.865183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.865296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.865327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.865516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.865546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.957 qpair failed and we were unable to recover it.
00:28:31.957 [2024-12-10 12:36:53.865729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.957 [2024-12-10 12:36:53.865761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.958 qpair failed and we were unable to recover it.
00:28:31.958 [2024-12-10 12:36:53.865878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.958 [2024-12-10 12:36:53.865908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.958 qpair failed and we were unable to recover it.
00:28:31.958 [2024-12-10 12:36:53.866021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.958 [2024-12-10 12:36:53.866053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.958 qpair failed and we were unable to recover it.
00:28:31.958 [2024-12-10 12:36:53.866170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.958 [2024-12-10 12:36:53.866203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.958 qpair failed and we were unable to recover it.
00:28:31.958 [2024-12-10 12:36:53.866471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.958 [2024-12-10 12:36:53.866503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.958 qpair failed and we were unable to recover it.
00:28:31.958 [2024-12-10 12:36:53.866603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.958 [2024-12-10 12:36:53.866633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.958 qpair failed and we were unable to recover it.
00:28:31.958 [2024-12-10 12:36:53.866828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.958 [2024-12-10 12:36:53.866859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.958 qpair failed and we were unable to recover it.
00:28:31.958 [2024-12-10 12:36:53.866965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.958 [2024-12-10 12:36:53.866996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.958 qpair failed and we were unable to recover it.
00:28:31.958 [2024-12-10 12:36:53.867110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.958 [2024-12-10 12:36:53.867142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.958 qpair failed and we were unable to recover it.
00:28:31.958 [2024-12-10 12:36:53.867262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.958 [2024-12-10 12:36:53.867293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.958 qpair failed and we were unable to recover it.
00:28:31.958 [2024-12-10 12:36:53.867415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.958 [2024-12-10 12:36:53.867447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.958 qpair failed and we were unable to recover it.
00:28:31.958 [2024-12-10 12:36:53.867663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.958 [2024-12-10 12:36:53.867694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.958 qpair failed and we were unable to recover it.
00:28:31.958 [2024-12-10 12:36:53.867806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.958 [2024-12-10 12:36:53.867837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.958 qpair failed and we were unable to recover it.
00:28:31.958 [2024-12-10 12:36:53.867964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.958 [2024-12-10 12:36:53.867995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.958 qpair failed and we were unable to recover it.
00:28:31.958 [2024-12-10 12:36:53.868106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.958 [2024-12-10 12:36:53.868138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.958 qpair failed and we were unable to recover it.
00:28:31.958 [2024-12-10 12:36:53.868397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.958 [2024-12-10 12:36:53.868428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.958 qpair failed and we were unable to recover it.
00:28:31.958 [2024-12-10 12:36:53.868601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.958 [2024-12-10 12:36:53.868632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.958 qpair failed and we were unable to recover it. 00:28:31.958 [2024-12-10 12:36:53.868749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.958 [2024-12-10 12:36:53.868780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.958 qpair failed and we were unable to recover it. 00:28:31.958 [2024-12-10 12:36:53.868986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.958 [2024-12-10 12:36:53.869017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.958 qpair failed and we were unable to recover it. 00:28:31.958 [2024-12-10 12:36:53.869135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.958 [2024-12-10 12:36:53.869178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.958 qpair failed and we were unable to recover it. 00:28:31.958 [2024-12-10 12:36:53.869341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.958 [2024-12-10 12:36:53.869411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.958 qpair failed and we were unable to recover it. 
00:28:31.958 [2024-12-10 12:36:53.869535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.958 [2024-12-10 12:36:53.869572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.958 qpair failed and we were unable to recover it. 00:28:31.958 [2024-12-10 12:36:53.869675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.958 [2024-12-10 12:36:53.869706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.958 qpair failed and we were unable to recover it. 00:28:31.958 [2024-12-10 12:36:53.869817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.958 [2024-12-10 12:36:53.869848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.958 qpair failed and we were unable to recover it. 00:28:31.958 [2024-12-10 12:36:53.870020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.958 [2024-12-10 12:36:53.870050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.958 qpair failed and we were unable to recover it. 00:28:31.958 [2024-12-10 12:36:53.870246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.958 [2024-12-10 12:36:53.870277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.958 qpair failed and we were unable to recover it. 
00:28:31.958 [2024-12-10 12:36:53.870387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.958 [2024-12-10 12:36:53.870419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.958 qpair failed and we were unable to recover it. 00:28:31.958 [2024-12-10 12:36:53.870535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.958 [2024-12-10 12:36:53.870565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.958 qpair failed and we were unable to recover it. 00:28:31.958 [2024-12-10 12:36:53.870741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.958 [2024-12-10 12:36:53.870772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.958 qpair failed and we were unable to recover it. 00:28:31.958 [2024-12-10 12:36:53.870893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.958 [2024-12-10 12:36:53.870923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.958 qpair failed and we were unable to recover it. 00:28:31.958 [2024-12-10 12:36:53.871035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.958 [2024-12-10 12:36:53.871066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.958 qpair failed and we were unable to recover it. 
00:28:31.958 [2024-12-10 12:36:53.871184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.958 [2024-12-10 12:36:53.871217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.958 qpair failed and we were unable to recover it. 00:28:31.958 [2024-12-10 12:36:53.871401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.958 [2024-12-10 12:36:53.871432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.958 qpair failed and we were unable to recover it. 00:28:31.958 [2024-12-10 12:36:53.871548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.958 [2024-12-10 12:36:53.871579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.958 qpair failed and we were unable to recover it. 00:28:31.958 [2024-12-10 12:36:53.871702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.958 [2024-12-10 12:36:53.871734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.958 qpair failed and we were unable to recover it. 00:28:31.958 [2024-12-10 12:36:53.871834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.958 [2024-12-10 12:36:53.871865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.958 qpair failed and we were unable to recover it. 
00:28:31.958 [2024-12-10 12:36:53.871969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.958 [2024-12-10 12:36:53.872000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.958 qpair failed and we were unable to recover it. 00:28:31.958 [2024-12-10 12:36:53.872123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.958 [2024-12-10 12:36:53.872154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.958 qpair failed and we were unable to recover it. 00:28:31.958 [2024-12-10 12:36:53.872358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.958 [2024-12-10 12:36:53.872389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.958 qpair failed and we were unable to recover it. 00:28:31.958 [2024-12-10 12:36:53.872509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.872540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.872640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.872670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 
00:28:31.959 [2024-12-10 12:36:53.872776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.872807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.872910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.872940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.873068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.873099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.873298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.873330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.873431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.873463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 
00:28:31.959 [2024-12-10 12:36:53.873561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.873592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.873713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.873750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.873868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.873901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.874004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.874036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.874137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.874178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 
00:28:31.959 [2024-12-10 12:36:53.874419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.874451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.874562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.874593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.874787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.874819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.874919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.874953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.875120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.875153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 
00:28:31.959 [2024-12-10 12:36:53.875312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.875344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.875445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.875477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.875588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.875618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.875733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.875765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.875891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.875922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 
00:28:31.959 [2024-12-10 12:36:53.876029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.876061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.876173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.876205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.876446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.876478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.876648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.876678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.876781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.876812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 
00:28:31.959 [2024-12-10 12:36:53.876913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.876944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.877053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.877085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.877218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.877250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.877442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.877472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.877582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.877612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 
00:28:31.959 [2024-12-10 12:36:53.877753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.877787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.877957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.877989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.878156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.878197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.878384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.878421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.878537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.878568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 
00:28:31.959 [2024-12-10 12:36:53.878670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.878701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.878820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.878851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.959 [2024-12-10 12:36:53.878965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.959 [2024-12-10 12:36:53.878995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.959 qpair failed and we were unable to recover it. 00:28:31.960 [2024-12-10 12:36:53.879096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.960 [2024-12-10 12:36:53.879127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.960 qpair failed and we were unable to recover it. 00:28:31.960 [2024-12-10 12:36:53.879270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.960 [2024-12-10 12:36:53.879303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.960 qpair failed and we were unable to recover it. 
00:28:31.960 [2024-12-10 12:36:53.879474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.960 [2024-12-10 12:36:53.879505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.960 qpair failed and we were unable to recover it. 00:28:31.960 [2024-12-10 12:36:53.879618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.960 [2024-12-10 12:36:53.879649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.960 qpair failed and we were unable to recover it. 00:28:31.960 [2024-12-10 12:36:53.879752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.960 [2024-12-10 12:36:53.879781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.960 qpair failed and we were unable to recover it. 00:28:31.960 [2024-12-10 12:36:53.879935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.960 [2024-12-10 12:36:53.879967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.960 qpair failed and we were unable to recover it. 00:28:31.960 [2024-12-10 12:36:53.880085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.960 [2024-12-10 12:36:53.880116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.960 qpair failed and we were unable to recover it. 
00:28:31.960 [2024-12-10 12:36:53.880233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.960 [2024-12-10 12:36:53.880266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.960 qpair failed and we were unable to recover it. 00:28:31.960 [2024-12-10 12:36:53.880380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.960 [2024-12-10 12:36:53.880411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.960 qpair failed and we were unable to recover it. 00:28:31.960 [2024-12-10 12:36:53.880523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.960 [2024-12-10 12:36:53.880554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.960 qpair failed and we were unable to recover it. 00:28:31.960 [2024-12-10 12:36:53.880752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.960 [2024-12-10 12:36:53.880783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.960 qpair failed and we were unable to recover it. 00:28:31.960 [2024-12-10 12:36:53.880890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.960 [2024-12-10 12:36:53.880922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.960 qpair failed and we were unable to recover it. 
00:28:31.960 [2024-12-10 12:36:53.881055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.960 [2024-12-10 12:36:53.881086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.960 qpair failed and we were unable to recover it. 00:28:31.960 [2024-12-10 12:36:53.881202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.960 [2024-12-10 12:36:53.881234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.960 qpair failed and we were unable to recover it. 00:28:31.960 [2024-12-10 12:36:53.881356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.960 [2024-12-10 12:36:53.881386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.960 qpair failed and we were unable to recover it. 00:28:31.960 [2024-12-10 12:36:53.881503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.960 [2024-12-10 12:36:53.881534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.960 qpair failed and we were unable to recover it. 00:28:31.960 [2024-12-10 12:36:53.881634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.960 [2024-12-10 12:36:53.881665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.960 qpair failed and we were unable to recover it. 
00:28:31.960 [2024-12-10 12:36:53.881831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.960 [2024-12-10 12:36:53.881862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.960 qpair failed and we were unable to recover it. 00:28:31.960 [2024-12-10 12:36:53.882029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.960 [2024-12-10 12:36:53.882061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.960 qpair failed and we were unable to recover it. 00:28:31.960 [2024-12-10 12:36:53.882192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.960 [2024-12-10 12:36:53.882223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.960 qpair failed and we were unable to recover it. 00:28:31.960 [2024-12-10 12:36:53.882352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.960 [2024-12-10 12:36:53.882383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.960 qpair failed and we were unable to recover it. 00:28:31.960 [2024-12-10 12:36:53.882514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.960 [2024-12-10 12:36:53.882545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.960 qpair failed and we were unable to recover it. 
00:28:31.960 [2024-12-10 12:36:53.882650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.960 [2024-12-10 12:36:53.882686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.960 qpair failed and we were unable to recover it.
[... identical connect()-failed / qpair-failed records repeated for tqpair=0x1574be0 through 12:36:53.889958 ...]
00:28:31.961 [2024-12-10 12:36:53.890196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.961 [2024-12-10 12:36:53.890269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.961 qpair failed and we were unable to recover it.
[... identical connect()-failed / qpair-failed records repeated for tqpair=0x7f88cc000b90 through 12:36:53.903462 ...]
00:28:31.963 [2024-12-10 12:36:53.903632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.963 [2024-12-10 12:36:53.903662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.963 qpair failed and we were unable to recover it. 00:28:31.963 [2024-12-10 12:36:53.903904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.963 [2024-12-10 12:36:53.903935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.963 qpair failed and we were unable to recover it. 00:28:31.963 [2024-12-10 12:36:53.904039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.963 [2024-12-10 12:36:53.904069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.963 qpair failed and we were unable to recover it. 00:28:31.963 [2024-12-10 12:36:53.904182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.963 [2024-12-10 12:36:53.904215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.963 qpair failed and we were unable to recover it. 00:28:31.963 [2024-12-10 12:36:53.904385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.963 [2024-12-10 12:36:53.904417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.963 qpair failed and we were unable to recover it. 
00:28:31.963 [2024-12-10 12:36:53.904519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.963 [2024-12-10 12:36:53.904549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.963 qpair failed and we were unable to recover it. 00:28:31.963 [2024-12-10 12:36:53.904657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.963 [2024-12-10 12:36:53.904693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.963 qpair failed and we were unable to recover it. 00:28:31.963 [2024-12-10 12:36:53.904814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.963 [2024-12-10 12:36:53.904843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.963 qpair failed and we were unable to recover it. 00:28:31.963 [2024-12-10 12:36:53.904959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.904990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.905096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.905127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 
00:28:31.964 [2024-12-10 12:36:53.905325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.905395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.905655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.905690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.905814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.905845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.906013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.906045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.906170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.906203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 
00:28:31.964 [2024-12-10 12:36:53.906398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.906428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.906541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.906571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.906756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.906785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.906980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.907011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.907238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.907271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 
00:28:31.964 [2024-12-10 12:36:53.907408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.907439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.907612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.907643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.907818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.907848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.908017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.908048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.908228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.908259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 
00:28:31.964 [2024-12-10 12:36:53.908360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.908390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.908511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.908542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.908647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.908678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.908793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.908823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.908926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.908956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 
00:28:31.964 [2024-12-10 12:36:53.909062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.909093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.909197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.909229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.909420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.909451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.909577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.909613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.909721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.909752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 
00:28:31.964 [2024-12-10 12:36:53.909941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.909972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.910084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.910114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.910255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.910287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.910391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.910421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.910552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.910582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 
00:28:31.964 [2024-12-10 12:36:53.910755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.910786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.910890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.910920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.911118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.911148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.911264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.911294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.911401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.911432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 
00:28:31.964 [2024-12-10 12:36:53.911545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.911575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.911763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.911794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.964 qpair failed and we were unable to recover it. 00:28:31.964 [2024-12-10 12:36:53.911983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.964 [2024-12-10 12:36:53.912014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.912205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.912236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.912350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.912380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 
00:28:31.965 [2024-12-10 12:36:53.912550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.912580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.912693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.912723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.912889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.912919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.913022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.913052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.913171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.913203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 
00:28:31.965 [2024-12-10 12:36:53.913305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.913336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.913502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.913533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.913647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.913677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.913867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.913898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.914065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.914094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 
00:28:31.965 [2024-12-10 12:36:53.914203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.914240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.914358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.914389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.914492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.914523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.914637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.914666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.914881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.914913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 
00:28:31.965 [2024-12-10 12:36:53.915014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.915044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.915145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.915187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.915283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.915314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.915408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.915446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.915558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.915588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 
00:28:31.965 [2024-12-10 12:36:53.915720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.915752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.915942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.915972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.916071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.916102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.916325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.916357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.916537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.916568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 
00:28:31.965 [2024-12-10 12:36:53.916688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.916718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.916887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.916917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.917046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.917077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.917194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.917225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.917326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.917356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 
00:28:31.965 [2024-12-10 12:36:53.917460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.917490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.917588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.917618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.917779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.917810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.917935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.917966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.918147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.918188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 
00:28:31.965 [2024-12-10 12:36:53.918358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.918388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.965 [2024-12-10 12:36:53.918493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.965 [2024-12-10 12:36:53.918523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.965 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.918716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.918752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.918867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.918898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.919136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.919176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 
00:28:31.966 [2024-12-10 12:36:53.919279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.919310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.919478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.919507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.919618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.919649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.919785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.919815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.919930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.919961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 
00:28:31.966 [2024-12-10 12:36:53.920126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.920164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.920334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.920365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.920487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.920518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.920693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.920724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.920824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.920854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 
00:28:31.966 [2024-12-10 12:36:53.921025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.921055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.921224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.921295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.921433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.921470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.921589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.921621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.921820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.921852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 
00:28:31.966 [2024-12-10 12:36:53.921967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.921999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.922100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.922131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.922266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.922300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.922416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.922447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.922581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.922612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 
00:28:31.966 [2024-12-10 12:36:53.922734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.922764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.922883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.922913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.923021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.923051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.923215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.923246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.923455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.923484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 
00:28:31.966 [2024-12-10 12:36:53.923664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.923695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.923878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.923908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.924009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.924040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.924152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.924193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.924316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.924346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 
00:28:31.966 [2024-12-10 12:36:53.924452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.966 [2024-12-10 12:36:53.924481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.966 qpair failed and we were unable to recover it. 00:28:31.966 [2024-12-10 12:36:53.924669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.924700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.924818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.924848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.925035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.925065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.925215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.925248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 
00:28:31.967 [2024-12-10 12:36:53.925375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.925405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.925521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.925551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.925665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.925695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.925812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.925847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.925979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.926011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 
00:28:31.967 [2024-12-10 12:36:53.926116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.926147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.926375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.926407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.926525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.926557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.926762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.926792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.926909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.926941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 
00:28:31.967 [2024-12-10 12:36:53.927120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.927151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.927277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.927309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.927428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.927459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.927562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.927594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.927776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.927807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 
00:28:31.967 [2024-12-10 12:36:53.927923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.927955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.928195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.928236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.928432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.928463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.928568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.928600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.928700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.928731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 
00:28:31.967 [2024-12-10 12:36:53.928905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.928936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.929103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.929134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.929314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.929346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.929474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.929504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.929623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.929654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 
00:28:31.967 [2024-12-10 12:36:53.929823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.929853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.930020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.930051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.930173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.930206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.930452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.930483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.930678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.930709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 
00:28:31.967 [2024-12-10 12:36:53.930816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.930847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.930955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.930985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.931154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.931196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.931365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.931396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 00:28:31.967 [2024-12-10 12:36:53.931512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.967 [2024-12-10 12:36:53.931543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.967 qpair failed and we were unable to recover it. 
00:28:31.968 [2024-12-10 12:36:53.931657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-10 12:36:53.931688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 00:28:31.968 [2024-12-10 12:36:53.931795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-10 12:36:53.931826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 00:28:31.968 [2024-12-10 12:36:53.931991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-10 12:36:53.932022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 00:28:31.968 [2024-12-10 12:36:53.932141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-10 12:36:53.932182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 00:28:31.968 [2024-12-10 12:36:53.932299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-10 12:36:53.932330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 
00:28:31.968 [2024-12-10 12:36:53.932441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-10 12:36:53.932472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 00:28:31.968 [2024-12-10 12:36:53.932587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-10 12:36:53.932617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 00:28:31.968 [2024-12-10 12:36:53.932787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-10 12:36:53.932817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 00:28:31.968 [2024-12-10 12:36:53.933055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-10 12:36:53.933125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 00:28:31.968 [2024-12-10 12:36:53.933364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.968 [2024-12-10 12:36:53.933401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.968 qpair failed and we were unable to recover it. 
00:28:31.968 [2024-12-10 12:36:53.933612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.968 [2024-12-10 12:36:53.933644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.968 qpair failed and we were unable to recover it.
00:28:31.969 [2024-12-10 12:36:53.940582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.969 [2024-12-10 12:36:53.940652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.969 qpair failed and we were unable to recover it.
00:28:31.969 [2024-12-10 12:36:53.940817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.969 [2024-12-10 12:36:53.940886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.969 qpair failed and we were unable to recover it.
00:28:31.969 [2024-12-10 12:36:53.941026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.969 [2024-12-10 12:36:53.941066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.969 qpair failed and we were unable to recover it.
00:28:31.971 [2024-12-10 12:36:53.953643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.953674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-10 12:36:53.953870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.953900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-10 12:36:53.954072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.954103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-10 12:36:53.954282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.954324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-10 12:36:53.954456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.954488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 
00:28:31.971 [2024-12-10 12:36:53.954612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.954642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-10 12:36:53.954742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.954772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-10 12:36:53.954878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.954908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-10 12:36:53.955075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.955107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-10 12:36:53.955241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.955273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 
00:28:31.971 [2024-12-10 12:36:53.955398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.955432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-10 12:36:53.955549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.955580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-10 12:36:53.955686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.955717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-10 12:36:53.955945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.955976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-10 12:36:53.956086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.956116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 
00:28:31.971 [2024-12-10 12:36:53.956299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.956333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-10 12:36:53.956447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.956482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-10 12:36:53.956683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.956728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-10 12:36:53.956826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.956857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-10 12:36:53.956983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.957020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 
00:28:31.971 [2024-12-10 12:36:53.957125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.957155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-10 12:36:53.957378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.957408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-10 12:36:53.957520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.957551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.971 [2024-12-10 12:36:53.957725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.971 [2024-12-10 12:36:53.957760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.971 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.958006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.958039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 
00:28:31.972 [2024-12-10 12:36:53.958236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.958270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.958414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.958445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.958570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.958600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.958724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.958755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.958925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.958959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 
00:28:31.972 [2024-12-10 12:36:53.959078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.959109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.959304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.959337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.959444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.959475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.959593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.959624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.959737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.959768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 
00:28:31.972 [2024-12-10 12:36:53.959888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.959918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.960021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.960052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.960156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.960214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.960323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.960354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.960544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.960574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 
00:28:31.972 [2024-12-10 12:36:53.960684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.960714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.960831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.960863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.961062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.961093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.961278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.961309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.961427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.961465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 
00:28:31.972 [2024-12-10 12:36:53.961581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.961611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.961710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.961742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.961849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.961880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.961992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.962022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.962140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.962183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 
00:28:31.972 [2024-12-10 12:36:53.962355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.962386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.962496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.962527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.962651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.962682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.962855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.962885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.963094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.963124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 
00:28:31.972 [2024-12-10 12:36:53.963240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.963272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.963397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.963428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.963597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.963627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.963806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.963876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.964070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.964105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 
00:28:31.972 [2024-12-10 12:36:53.964243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.964276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.964394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.964425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.964528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.972 [2024-12-10 12:36:53.964558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.972 qpair failed and we were unable to recover it. 00:28:31.972 [2024-12-10 12:36:53.964773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.964804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.964971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.965002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 
00:28:31.973 [2024-12-10 12:36:53.965118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.965149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.965332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.965363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.965603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.965635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.965756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.965789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.965896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.965927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 
00:28:31.973 [2024-12-10 12:36:53.966031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.966064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.966231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.966273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.966393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.966424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.966530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.966560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.966692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.966723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 
00:28:31.973 [2024-12-10 12:36:53.966904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.966933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.967034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.967065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.967173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.967211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.967316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.967346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.967541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.967572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 
00:28:31.973 [2024-12-10 12:36:53.967757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.967787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.967972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.968003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.968216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.968254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.968429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.968459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.968575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.968607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 
00:28:31.973 [2024-12-10 12:36:53.968715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.968745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.968923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.968953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.969065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.969095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.969199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.969231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.969336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.969366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 
00:28:31.973 [2024-12-10 12:36:53.969466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.969498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.969690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.969724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.969905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.969936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.970182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.970215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.970331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.970362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 
00:28:31.973 [2024-12-10 12:36:53.970468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.970499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.970630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.970661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.970833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.970868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.971037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.971107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.971330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.971367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 
00:28:31.973 [2024-12-10 12:36:53.971488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.971519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.971638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.973 [2024-12-10 12:36:53.971669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.973 qpair failed and we were unable to recover it. 00:28:31.973 [2024-12-10 12:36:53.971786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.971817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.972076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.972107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.972316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.972349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 
00:28:31.974 [2024-12-10 12:36:53.972515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.972544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.972662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.972694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.972884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.972915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.973031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.973061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.973188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.973220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 
00:28:31.974 [2024-12-10 12:36:53.973342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.973374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.973495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.973535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.973650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.973681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.973803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.973834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.974008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.974039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 
00:28:31.974 [2024-12-10 12:36:53.974144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.974184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.974304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.974335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.974446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.974477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.974588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.974619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.974725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.974755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 
00:28:31.974 [2024-12-10 12:36:53.974870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.974900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.975008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.975038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.975142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.975183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.975353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.975383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.975673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.975704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 
00:28:31.974 [2024-12-10 12:36:53.975907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.975937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.976124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.976156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.976361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.976392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.976496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.976526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.976693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.976723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 
00:28:31.974 [2024-12-10 12:36:53.976895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.976926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.977027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.977057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.977259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.977291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.977419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.977451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.977643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.977673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 
00:28:31.974 [2024-12-10 12:36:53.977785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.977816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.977916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.977947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.978073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.978104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.978353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.978423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 00:28:31.974 [2024-12-10 12:36:53.978574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.974 [2024-12-10 12:36:53.978612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.974 qpair failed and we were unable to recover it. 
00:28:31.975 [2024-12-10 12:36:53.978721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.978752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.978926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.978956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.979060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.979092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.979204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.979237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.979344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.979374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 
00:28:31.975 [2024-12-10 12:36:53.979580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.979612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.979790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.979821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.979932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.979962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.980143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.980187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.980308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.980340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 
00:28:31.975 [2024-12-10 12:36:53.980441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.980472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.980663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.980693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.980877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.980909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.981012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.981042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.981206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.981238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 
00:28:31.975 [2024-12-10 12:36:53.981417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.981448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.981560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.981590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.981716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.981746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.981863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.981894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.982023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.982054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 
00:28:31.975 [2024-12-10 12:36:53.982171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.982205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.982322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.982353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.982472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.982503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.982687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.982718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.982910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.982940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 
00:28:31.975 [2024-12-10 12:36:53.983135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.983183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.983317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.983348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.983453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.983484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.983655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.983685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 00:28:31.975 [2024-12-10 12:36:53.983805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.975 [2024-12-10 12:36:53.983837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.975 qpair failed and we were unable to recover it. 
00:28:31.975 [2024-12-10 12:36:53.984048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.976 [2024-12-10 12:36:53.984078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.976 qpair failed and we were unable to recover it. 00:28:31.976 [2024-12-10 12:36:53.984247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.976 [2024-12-10 12:36:53.984278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.976 qpair failed and we were unable to recover it. 00:28:31.976 [2024-12-10 12:36:53.984407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.976 [2024-12-10 12:36:53.984438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.976 qpair failed and we were unable to recover it. 00:28:31.976 [2024-12-10 12:36:53.984569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.976 [2024-12-10 12:36:53.984599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.976 qpair failed and we were unable to recover it. 00:28:31.976 [2024-12-10 12:36:53.984712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.976 [2024-12-10 12:36:53.984743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.976 qpair failed and we were unable to recover it. 
00:28:31.977 [2024-12-10 12:36:53.994076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.977 [2024-12-10 12:36:53.994106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.977 qpair failed and we were unable to recover it.
00:28:31.977 [2024-12-10 12:36:53.994281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.977 [2024-12-10 12:36:53.994313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.977 qpair failed and we were unable to recover it.
00:28:31.977 [2024-12-10 12:36:53.994549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.977 [2024-12-10 12:36:53.994580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.977 qpair failed and we were unable to recover it.
00:28:31.977 [2024-12-10 12:36:53.994732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.977 [2024-12-10 12:36:53.994802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.977 qpair failed and we were unable to recover it.
00:28:31.977 [2024-12-10 12:36:53.994933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.977 [2024-12-10 12:36:53.994969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.977 qpair failed and we were unable to recover it.
00:28:31.978 [2024-12-10 12:36:53.999636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.978 [2024-12-10 12:36:53.999672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.978 qpair failed and we were unable to recover it.
00:28:31.978 [2024-12-10 12:36:53.999849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.978 [2024-12-10 12:36:53.999881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.978 qpair failed and we were unable to recover it.
00:28:31.978 [2024-12-10 12:36:54.000000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.978 [2024-12-10 12:36:54.000032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.978 qpair failed and we were unable to recover it.
00:28:31.978 [2024-12-10 12:36:54.000168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.978 [2024-12-10 12:36:54.000201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.978 qpair failed and we were unable to recover it.
00:28:31.978 [2024-12-10 12:36:54.000420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.978 [2024-12-10 12:36:54.000455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:31.978 qpair failed and we were unable to recover it.
00:28:31.978 [2024-12-10 12:36:54.000626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-10 12:36:54.000658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-10 12:36:54.000777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-10 12:36:54.000809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-10 12:36:54.000913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-10 12:36:54.000944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-10 12:36:54.001042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-10 12:36:54.001073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 00:28:31.978 [2024-12-10 12:36:54.001253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.978 [2024-12-10 12:36:54.001287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:31.978 qpair failed and we were unable to recover it. 
00:28:31.980 [2024-12-10 12:36:54.009350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.980 [2024-12-10 12:36:54.009421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.980 qpair failed and we were unable to recover it.
00:28:31.980 [2024-12-10 12:36:54.009602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.980 [2024-12-10 12:36:54.009678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:31.980 qpair failed and we were unable to recover it.
00:28:31.980 [2024-12-10 12:36:54.009808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.980 [2024-12-10 12:36:54.009855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:31.980 qpair failed and we were unable to recover it.
00:28:31.982 [2024-12-10 12:36:54.020964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.020997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-10 12:36:54.021103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.021134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-10 12:36:54.021342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.021373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-10 12:36:54.021493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.021524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-10 12:36:54.021641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.021671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 
00:28:31.982 [2024-12-10 12:36:54.021787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.021824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-10 12:36:54.021935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.021967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-10 12:36:54.022071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.022102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-10 12:36:54.022239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.022271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-10 12:36:54.022393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.022422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 
00:28:31.982 [2024-12-10 12:36:54.022590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.022621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-10 12:36:54.022792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.022829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-10 12:36:54.022959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.022990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-10 12:36:54.023094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.023125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-10 12:36:54.023374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.023444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 
00:28:31.982 [2024-12-10 12:36:54.023643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.023678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-10 12:36:54.023867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.023899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-10 12:36:54.024091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.024122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-10 12:36:54.024255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.024288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-10 12:36:54.024401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.024431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 
00:28:31.982 [2024-12-10 12:36:54.024545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.024576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-10 12:36:54.024686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.024717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-10 12:36:54.024830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.024861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-10 12:36:54.025041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.025071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-10 12:36:54.025253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.025285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 
00:28:31.982 [2024-12-10 12:36:54.025395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.025426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-10 12:36:54.025532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.025562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.982 [2024-12-10 12:36:54.025665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.982 [2024-12-10 12:36:54.025705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.982 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.025843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.025873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.026045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.026075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 
00:28:31.983 [2024-12-10 12:36:54.026266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.026298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.026412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.026442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.026548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.026578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.026770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.026801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.026918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.026948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 
00:28:31.983 [2024-12-10 12:36:54.027074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.027105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.027239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.027271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.027437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.027467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.027639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.027670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.027837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.027868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 
00:28:31.983 [2024-12-10 12:36:54.028048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.028078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.028202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.028234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.028356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.028386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.028560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.028591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.028712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.028742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 
00:28:31.983 [2024-12-10 12:36:54.028851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.028882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.028981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.029011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.029178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.029210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.029378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.029409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.029589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.029620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 
00:28:31.983 [2024-12-10 12:36:54.029724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.029754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.029946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.029976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.030174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.030206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.030321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.030353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.030546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.030600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 
00:28:31.983 [2024-12-10 12:36:54.030804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.030841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.030965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.031008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.031205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.031243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.031398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.031437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 00:28:31.983 [2024-12-10 12:36:54.031617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.983 [2024-12-10 12:36:54.031656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.983 qpair failed and we were unable to recover it. 
00:28:31.983 [2024-12-10 12:36:54.031776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.984 [2024-12-10 12:36:54.031808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.984 qpair failed and we were unable to recover it. 00:28:31.984 [2024-12-10 12:36:54.031936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.984 [2024-12-10 12:36:54.031968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.984 qpair failed and we were unable to recover it. 00:28:31.984 [2024-12-10 12:36:54.032093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.984 [2024-12-10 12:36:54.032126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:31.984 qpair failed and we were unable to recover it. 00:28:31.984 [2024-12-10 12:36:54.032268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.984 [2024-12-10 12:36:54.032303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.984 qpair failed and we were unable to recover it. 00:28:31.984 [2024-12-10 12:36:54.032497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.984 [2024-12-10 12:36:54.032528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.984 qpair failed and we were unable to recover it. 
00:28:31.984 [2024-12-10 12:36:54.032724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.984 [2024-12-10 12:36:54.032755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.984 qpair failed and we were unable to recover it. 00:28:31.984 [2024-12-10 12:36:54.032856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.984 [2024-12-10 12:36:54.032886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.984 qpair failed and we were unable to recover it. 00:28:31.984 [2024-12-10 12:36:54.033015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.984 [2024-12-10 12:36:54.033051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.984 qpair failed and we were unable to recover it. 00:28:31.984 [2024-12-10 12:36:54.033165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.984 [2024-12-10 12:36:54.033198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.984 qpair failed and we were unable to recover it. 00:28:31.984 [2024-12-10 12:36:54.033304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.984 [2024-12-10 12:36:54.033335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.984 qpair failed and we were unable to recover it. 
00:28:31.984 [2024-12-10 12:36:54.033464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.984 [2024-12-10 12:36:54.033501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.984 qpair failed and we were unable to recover it. 00:28:31.984 [2024-12-10 12:36:54.033677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.984 [2024-12-10 12:36:54.033708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.984 qpair failed and we were unable to recover it. 00:28:31.984 [2024-12-10 12:36:54.033808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.984 [2024-12-10 12:36:54.033838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.984 qpair failed and we were unable to recover it. 00:28:31.984 [2024-12-10 12:36:54.034023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.984 [2024-12-10 12:36:54.034054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.984 qpair failed and we were unable to recover it. 00:28:31.984 [2024-12-10 12:36:54.034156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.984 [2024-12-10 12:36:54.034196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.984 qpair failed and we were unable to recover it. 
00:28:31.984 [2024-12-10 12:36:54.034304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:31.984 [2024-12-10 12:36:54.034335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:31.984 qpair failed and we were unable to recover it.
00:28:31.987 (the three messages above repeat with identical content — same tqpair=0x7f88cc000b90, addr=10.0.0.2, port=4420, errno = 111 — from [2024-12-10 12:36:54.034566] through [2024-12-10 12:36:54.053861])
00:28:31.987 [2024-12-10 12:36:54.054099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-10 12:36:54.054127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-10 12:36:54.054253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-10 12:36:54.054283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-10 12:36:54.054447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-10 12:36:54.054475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-10 12:36:54.054680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-10 12:36:54.054708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-10 12:36:54.054873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-10 12:36:54.054902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 
00:28:31.987 [2024-12-10 12:36:54.055070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-10 12:36:54.055099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-10 12:36:54.055228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-10 12:36:54.055256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-10 12:36:54.055363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-10 12:36:54.055392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-10 12:36:54.055511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-10 12:36:54.055538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-10 12:36:54.055699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-10 12:36:54.055727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 
00:28:31.987 [2024-12-10 12:36:54.055823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-10 12:36:54.055851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-10 12:36:54.056017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-10 12:36:54.056045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-10 12:36:54.056275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-10 12:36:54.056303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-10 12:36:54.056620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-10 12:36:54.056648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 00:28:31.987 [2024-12-10 12:36:54.056750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.987 [2024-12-10 12:36:54.056778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.987 qpair failed and we were unable to recover it. 
00:28:31.988 [2024-12-10 12:36:54.056943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.056972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.057082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.057110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.057226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.057255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.057427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.057454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.057558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.057586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 
00:28:31.988 [2024-12-10 12:36:54.057679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.057706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.057868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.057896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.057995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.058023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.058111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.058140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.058249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.058283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 
00:28:31.988 [2024-12-10 12:36:54.058483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.058512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.058616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.058643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.058767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.058795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.058891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.058919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.059014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.059042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 
00:28:31.988 [2024-12-10 12:36:54.059166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.059196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.059377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.059405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.059566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.059594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.059691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.059719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.059879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.059907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 
00:28:31.988 [2024-12-10 12:36:54.060009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.060037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.060155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.060196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.060310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.060338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.060514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.060542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.060724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.060753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 
00:28:31.988 [2024-12-10 12:36:54.060849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.060877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.060986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.061014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.061130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.061168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.061280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.061307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.061411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.061439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 
00:28:31.988 [2024-12-10 12:36:54.061554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.061581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.061757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.061784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.061955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.061983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.062110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.062138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.062341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.062371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 
00:28:31.988 [2024-12-10 12:36:54.062581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.062609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.062713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.062740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.062922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.062950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.063128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.063167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.063278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.063306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 
00:28:31.988 [2024-12-10 12:36:54.063403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.063431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.988 [2024-12-10 12:36:54.063531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.988 [2024-12-10 12:36:54.063558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.988 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.063662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.063690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.063882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.063910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.064076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.064104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 
00:28:31.989 [2024-12-10 12:36:54.064219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.064255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.064372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.064400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.064496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.064523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.064710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.064739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.064839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.064874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 
00:28:31.989 [2024-12-10 12:36:54.065043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.065071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.065182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.065211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.065340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.065368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.065470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.065499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.065627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.065654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 
00:28:31.989 [2024-12-10 12:36:54.065767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.065796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.066031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.066058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.066223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.066252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.066366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.066394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.066509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.066537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 
00:28:31.989 [2024-12-10 12:36:54.066630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.066657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.066749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.066777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.066885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.066912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.067018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.067047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.067212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.067241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 
00:28:31.989 [2024-12-10 12:36:54.067494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.067521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.067629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.067656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.067775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.067803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.067905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.067933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.068028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.068056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 
00:28:31.989 [2024-12-10 12:36:54.068261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.068289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.068383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.068410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.068569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.068597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.068765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.068792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.068955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.068983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 
00:28:31.989 [2024-12-10 12:36:54.069077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.069105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.069322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.069351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.069542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.069569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.069666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.069694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.069811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.069838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 
00:28:31.989 [2024-12-10 12:36:54.069936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.069964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.070074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.070102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.070220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.070250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.070480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.989 [2024-12-10 12:36:54.070508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.989 qpair failed and we were unable to recover it. 00:28:31.989 [2024-12-10 12:36:54.070676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.070704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 
00:28:31.990 [2024-12-10 12:36:54.070867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.070894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.071052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.071080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.071320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.071349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.071456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.071484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.071597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.071630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 
00:28:31.990 [2024-12-10 12:36:54.071751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.071779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.071883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.071910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.072011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.072039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.072226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.072256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.072351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.072378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 
00:28:31.990 [2024-12-10 12:36:54.072536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.072564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.072733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.072761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.072924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.072952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.073113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.073141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.073264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.073293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 
00:28:31.990 [2024-12-10 12:36:54.073398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.073425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.073688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.073716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.073878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.073905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.074089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.074117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.074232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.074261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 
00:28:31.990 [2024-12-10 12:36:54.074437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.074465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.074560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.074587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.074773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.074803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.074911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.074942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.075121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.075152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 
00:28:31.990 [2024-12-10 12:36:54.075376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.075404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.075499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.075527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.075718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.075747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.075912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.075939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.076033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.076060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 
00:28:31.990 [2024-12-10 12:36:54.076173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.076202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.076383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.076412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.076588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.076616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.076709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.076738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 00:28:31.990 [2024-12-10 12:36:54.076847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.990 [2024-12-10 12:36:54.076874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.990 qpair failed and we were unable to recover it. 
00:28:31.990 [2024-12-10 12:36:54.077107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.991 [2024-12-10 12:36:54.077134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.991 qpair failed and we were unable to recover it. 00:28:31.991 [2024-12-10 12:36:54.077313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.991 [2024-12-10 12:36:54.077341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.991 qpair failed and we were unable to recover it. 00:28:31.991 [2024-12-10 12:36:54.077516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.991 [2024-12-10 12:36:54.077543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.991 qpair failed and we were unable to recover it. 00:28:31.991 [2024-12-10 12:36:54.077641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.991 [2024-12-10 12:36:54.077668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.991 qpair failed and we were unable to recover it. 00:28:31.991 [2024-12-10 12:36:54.077852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.991 [2024-12-10 12:36:54.077880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.991 qpair failed and we were unable to recover it. 
00:28:31.991 [2024-12-10 12:36:54.078051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.991 [2024-12-10 12:36:54.078078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.991 qpair failed and we were unable to recover it. 00:28:31.991 [2024-12-10 12:36:54.078197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.991 [2024-12-10 12:36:54.078228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.991 qpair failed and we were unable to recover it. 00:28:31.991 [2024-12-10 12:36:54.078330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:31.991 [2024-12-10 12:36:54.078357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:31.991 qpair failed and we were unable to recover it. 00:28:32.274 [2024-12-10 12:36:54.078443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.274 [2024-12-10 12:36:54.078471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.078668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.078701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 
00:28:32.275 [2024-12-10 12:36:54.078799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.078827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.078990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.079017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.079132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.079166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.079350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.079378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.079541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.079568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 
00:28:32.275 [2024-12-10 12:36:54.079702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.079729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.079897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.079925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.080029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.080056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.080169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.080198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.080369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.080396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 
00:28:32.275 [2024-12-10 12:36:54.080505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.080534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.080628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.080656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.080764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.080792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.080988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.081016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.081116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.081145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 
00:28:32.275 [2024-12-10 12:36:54.081329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.081357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.081456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.081483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.081654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.081681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.081776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.081805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.081914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.081942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 
00:28:32.275 [2024-12-10 12:36:54.082102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.082129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.082444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.082527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.082753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.082798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.083121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.083181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.083339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.083380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 
00:28:32.275 [2024-12-10 12:36:54.083573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.083619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.083830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.083877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.084009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.084051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.084259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.084308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.084468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.084510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 
00:28:32.275 [2024-12-10 12:36:54.084724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.084772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.084891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.084923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.085043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.085074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.085244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.085276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.085516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.085548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 
00:28:32.275 [2024-12-10 12:36:54.085645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.085675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.085863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.275 [2024-12-10 12:36:54.085894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.275 qpair failed and we were unable to recover it. 00:28:32.275 [2024-12-10 12:36:54.086057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.086087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.086271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.086301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.086412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.086452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 
00:28:32.276 [2024-12-10 12:36:54.086625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.086656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.086786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.086821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.087057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.087087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.087255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.087286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.087472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.087502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 
00:28:32.276 [2024-12-10 12:36:54.087666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.087696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.087813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.087843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.088014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.088044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.088214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.088248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.088366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.088398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 
00:28:32.276 [2024-12-10 12:36:54.088517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.088550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.088723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.088755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.088867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.088899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.089076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.089109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.089291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.089325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 
00:28:32.276 [2024-12-10 12:36:54.089491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.089523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.089644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.089676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.089777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.089809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.089924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.089956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.090125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.090169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 
00:28:32.276 [2024-12-10 12:36:54.090274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.090307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.090476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.090508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.090677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.090709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.090925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.090970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.091188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.091223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 
00:28:32.276 [2024-12-10 12:36:54.091351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.091381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.091508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.091548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.091753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.091785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.091947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.091979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.092087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.092120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 
00:28:32.276 [2024-12-10 12:36:54.092303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.092335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.092465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.092497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.092605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.092637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.092737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.092768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.276 [2024-12-10 12:36:54.092939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.092971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 
00:28:32.276 [2024-12-10 12:36:54.093072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.276 [2024-12-10 12:36:54.093104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.276 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.093283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.093316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.093488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.093519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.093780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.093812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.093979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.094023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 
00:28:32.277 [2024-12-10 12:36:54.094209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.094242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.094410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.094443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.094573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.094604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.094769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.094802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.094917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.094949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 
00:28:32.277 [2024-12-10 12:36:54.095131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.095172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.095345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.095377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.095564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.095596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.095706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.095737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.095924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.095956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 
00:28:32.277 [2024-12-10 12:36:54.096192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.096226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.096326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.096356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.096472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.096504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.096623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.096656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.096774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.096805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 
00:28:32.277 [2024-12-10 12:36:54.096921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.096953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.097131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.097170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.097276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.097308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.097475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.097506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.097623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.097654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 
00:28:32.277 [2024-12-10 12:36:54.097823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.097855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.098117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.098148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.098340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.098372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.098538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.098570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.098825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.098856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 
00:28:32.277 [2024-12-10 12:36:54.098966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.098998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.099177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.099211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.099422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.099454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.099660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.099692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.099831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.099863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 
00:28:32.277 [2024-12-10 12:36:54.099977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.100009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.100182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.100217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.100417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.100450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.100640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.100671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.277 qpair failed and we were unable to recover it. 00:28:32.277 [2024-12-10 12:36:54.100853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.277 [2024-12-10 12:36:54.100885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.278 qpair failed and we were unable to recover it. 
00:28:32.278 [2024-12-10 12:36:54.101003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.278 [2024-12-10 12:36:54.101034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.278 qpair failed and we were unable to recover it. 00:28:32.278 [2024-12-10 12:36:54.101146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.278 [2024-12-10 12:36:54.101188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.278 qpair failed and we were unable to recover it. 00:28:32.278 [2024-12-10 12:36:54.101317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.278 [2024-12-10 12:36:54.101349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.278 qpair failed and we were unable to recover it. 00:28:32.278 [2024-12-10 12:36:54.101600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.278 [2024-12-10 12:36:54.101632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.278 qpair failed and we were unable to recover it. 00:28:32.278 [2024-12-10 12:36:54.101803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.278 [2024-12-10 12:36:54.101841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.278 qpair failed and we were unable to recover it. 
00:28:32.278 [2024-12-10 12:36:54.101949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.278 [2024-12-10 12:36:54.101982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.278 qpair failed and we were unable to recover it. 00:28:32.278 [2024-12-10 12:36:54.102115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.278 [2024-12-10 12:36:54.102147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.278 qpair failed and we were unable to recover it. 00:28:32.278 [2024-12-10 12:36:54.102261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.278 [2024-12-10 12:36:54.102292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.278 qpair failed and we were unable to recover it. 00:28:32.278 [2024-12-10 12:36:54.102387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.278 [2024-12-10 12:36:54.102419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.278 qpair failed and we were unable to recover it. 00:28:32.278 [2024-12-10 12:36:54.102585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.278 [2024-12-10 12:36:54.102617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.278 qpair failed and we were unable to recover it. 
[... same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeated for tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420, timestamps 2024-12-10 12:36:54.102786 through 12:36:54.124809 ...]
00:28:32.280 [2024-12-10 12:36:54.124924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.124956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.125119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.125208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.125419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.125456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.125581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.125614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.125826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.125858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 
00:28:32.281 [2024-12-10 12:36:54.126030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.126063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.126300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.126335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.126522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.126554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.126736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.126768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.126942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.126981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 
00:28:32.281 [2024-12-10 12:36:54.127090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.127123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.127237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.127270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.127583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.127615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.127833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.127866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.128052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.128095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 
00:28:32.281 [2024-12-10 12:36:54.128311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.128345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.128461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.128493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.128773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.128806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.129037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.129070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.129246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.129280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 
00:28:32.281 [2024-12-10 12:36:54.129396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.129429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.129608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.129641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.129823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.129856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.129964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.129997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.130198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.130232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 
00:28:32.281 [2024-12-10 12:36:54.130471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.130504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.130766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.130797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.131061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.131094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.131281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.131316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.131485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.131518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 
00:28:32.281 [2024-12-10 12:36:54.131785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.131817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.131984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.132017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.132134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.132179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.132351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.132384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.132566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.132599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 
00:28:32.281 [2024-12-10 12:36:54.132837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.132870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.133136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.133176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.133348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.133381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.133553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.281 [2024-12-10 12:36:54.133585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.281 qpair failed and we were unable to recover it. 00:28:32.281 [2024-12-10 12:36:54.133891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.133924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 
00:28:32.282 [2024-12-10 12:36:54.134052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.134084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.134197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.134232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.134498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.134531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.134797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.134829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.135012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.135044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 
00:28:32.282 [2024-12-10 12:36:54.135306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.135339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.135454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.135485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.135651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.135684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.135875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.135907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.136095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.136128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 
00:28:32.282 [2024-12-10 12:36:54.136310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.136343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.136561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.136594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.136761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.136793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.136998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.137030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.137224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.137258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 
00:28:32.282 [2024-12-10 12:36:54.137380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.137412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.137579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.137612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.137800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.137833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.138083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.138115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.138311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.138344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 
00:28:32.282 [2024-12-10 12:36:54.138583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.138616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.138783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.138814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.139075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.139108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.139286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.139320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.139442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.139474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 
00:28:32.282 [2024-12-10 12:36:54.139640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.139672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.139933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.139967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.140072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.140104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.140312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.140347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.140467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.140499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 
00:28:32.282 [2024-12-10 12:36:54.140758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.140791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.140905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.140937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.141223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.141257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.141505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.141537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.141733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.141765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 
00:28:32.282 [2024-12-10 12:36:54.142032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.142065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.142174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.142208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.142470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.282 [2024-12-10 12:36:54.142503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.282 qpair failed and we were unable to recover it. 00:28:32.282 [2024-12-10 12:36:54.142673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.142706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.142817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.142849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 
00:28:32.283 [2024-12-10 12:36:54.143023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.143056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.143227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.143268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.143442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.143474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.143657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.143689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.143799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.143832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 
00:28:32.283 [2024-12-10 12:36:54.144097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.144130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.144356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.144389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.144586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.144619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.144799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.144832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.145094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.145128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 
00:28:32.283 [2024-12-10 12:36:54.145329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.145363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.145530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.145563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.145759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.145792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.145983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.146015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.146178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.146214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 
00:28:32.283 [2024-12-10 12:36:54.146395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.146428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.146678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.146710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.146937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.146969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.147076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.147108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.147302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.147336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 
00:28:32.283 [2024-12-10 12:36:54.147597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.147630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.147917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.147951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.148222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.148258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.148500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.148531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.148702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.148735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 
00:28:32.283 [2024-12-10 12:36:54.148908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.148941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.149124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.149167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.149416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.149449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.149643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.149676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.149933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.149966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 
00:28:32.283 [2024-12-10 12:36:54.150186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.150220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.150392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.150423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.283 [2024-12-10 12:36:54.150614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.283 [2024-12-10 12:36:54.150646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.283 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.150832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.150864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.151132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.151172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 
00:28:32.284 [2024-12-10 12:36:54.151429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.151461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.151757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.151790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.152052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.152084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.152389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.152422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.152677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.152710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 
00:28:32.284 [2024-12-10 12:36:54.152898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.152931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.153206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.153246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.153362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.153395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.153569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.153602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.153843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.153877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 
00:28:32.284 [2024-12-10 12:36:54.154095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.154127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.154267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.154301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.154461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.154493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.154619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.154652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.154847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.154879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 
00:28:32.284 [2024-12-10 12:36:54.155144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.155185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.155373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.155405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.155659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.155692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.155862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.155894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.156010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.156044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 
00:28:32.284 [2024-12-10 12:36:54.156287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.156322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.156614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.156647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.156771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.156803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.156989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.157022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.157208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.157242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 
00:28:32.284 [2024-12-10 12:36:54.157427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.157460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.157758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.157791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.157985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.158018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.158277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.158311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.158527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.158560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 
00:28:32.284 [2024-12-10 12:36:54.158728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.158761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.158953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.158987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.159099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.159131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.159351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.159386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.159584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.159617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 
00:28:32.284 [2024-12-10 12:36:54.159787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.159820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.284 qpair failed and we were unable to recover it. 00:28:32.284 [2024-12-10 12:36:54.159990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.284 [2024-12-10 12:36:54.160023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.160206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.160251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.160364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.160397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.160567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.160599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 
00:28:32.285 [2024-12-10 12:36:54.160769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.160802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.160970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.161003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.161189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.161222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.161474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.161507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.161794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.161827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 
00:28:32.285 [2024-12-10 12:36:54.162016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.162049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.162309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.162349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.162469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.162501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.162739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.162773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.162884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.162917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 
00:28:32.285 [2024-12-10 12:36:54.163088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.163121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.163248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.163281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.163453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.163486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.163695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.163728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.163936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.163969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 
00:28:32.285 [2024-12-10 12:36:54.164256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.164290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.164509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.164542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.164661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.164693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.164955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.164989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.165184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.165218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 
00:28:32.285 [2024-12-10 12:36:54.165412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.165446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.165630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.165663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.165835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.165867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.166056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.166089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.166264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.166298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 
00:28:32.285 [2024-12-10 12:36:54.166557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.166589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.166758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.166791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.166999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.167031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.167301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.167335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 00:28:32.285 [2024-12-10 12:36:54.167441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.285 [2024-12-10 12:36:54.167473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.285 qpair failed and we were unable to recover it. 
00:28:32.287 [2024-12-10 12:36:54.179599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.287 [2024-12-10 12:36:54.179631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.287 qpair failed and we were unable to recover it.
00:28:32.287 [2024-12-10 12:36:54.179812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.287 [2024-12-10 12:36:54.179845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.287 qpair failed and we were unable to recover it.
00:28:32.287 [2024-12-10 12:36:54.180074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.287 [2024-12-10 12:36:54.180107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.287 qpair failed and we were unable to recover it.
00:28:32.287 [2024-12-10 12:36:54.180295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.287 [2024-12-10 12:36:54.180329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.287 qpair failed and we were unable to recover it.
00:28:32.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1790435 Killed "${NVMF_APP[@]}" "$@"
00:28:32.287 [2024-12-10 12:36:54.180572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.287 [2024-12-10 12:36:54.180604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.287 qpair failed and we were unable to recover it.
00:28:32.287 [2024-12-10 12:36:54.180815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.287 [2024-12-10 12:36:54.180848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.287 qpair failed and we were unable to recover it.
00:28:32.287 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:28:32.287 [2024-12-10 12:36:54.181113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.287 [2024-12-10 12:36:54.181146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.287 qpair failed and we were unable to recover it.
00:28:32.287 [2024-12-10 12:36:54.181398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.287 [2024-12-10 12:36:54.181431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.287 qpair failed and we were unable to recover it.
00:28:32.287 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:28:32.287 [2024-12-10 12:36:54.181537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.287 [2024-12-10 12:36:54.181570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.287 qpair failed and we were unable to recover it.
00:28:32.287 [2024-12-10 12:36:54.181741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.287 [2024-12-10 12:36:54.181775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.287 qpair failed and we were unable to recover it.
00:28:32.287 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:32.287 [2024-12-10 12:36:54.181959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.287 [2024-12-10 12:36:54.181992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.287 qpair failed and we were unable to recover it.
00:28:32.287 [2024-12-10 12:36:54.182113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.287 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:32.287 [2024-12-10 12:36:54.182146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.287 qpair failed and we were unable to recover it.
00:28:32.287 [2024-12-10 12:36:54.182344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.287 [2024-12-10 12:36:54.182377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.287 qpair failed and we were unable to recover it.
00:28:32.287 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:32.287 [2024-12-10 12:36:54.182552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.287 [2024-12-10 12:36:54.182587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.287 qpair failed and we were unable to recover it.
00:28:32.287 [2024-12-10 12:36:54.182846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.287 [2024-12-10 12:36:54.182878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.287 qpair failed and we were unable to recover it.
00:28:32.287 [2024-12-10 12:36:54.183089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.287 [2024-12-10 12:36:54.183122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.287 qpair failed and we were unable to recover it.
00:28:32.287 [2024-12-10 12:36:54.183308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.287 [2024-12-10 12:36:54.183342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.287 qpair failed and we were unable to recover it.
00:28:32.287 [2024-12-10 12:36:54.183543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.287 [2024-12-10 12:36:54.183576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.287 qpair failed and we were unable to recover it.
00:28:32.288 [2024-12-10 12:36:54.188747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.288 [2024-12-10 12:36:54.188779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.288 qpair failed and we were unable to recover it.
00:28:32.288 [2024-12-10 12:36:54.188898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.288 [2024-12-10 12:36:54.188930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.288 qpair failed and we were unable to recover it.
00:28:32.288 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1791151
00:28:32.288 [2024-12-10 12:36:54.189204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.288 [2024-12-10 12:36:54.189238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.288 qpair failed and we were unable to recover it.
00:28:32.288 [2024-12-10 12:36:54.189411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.288 [2024-12-10 12:36:54.189444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.288 qpair failed and we were unable to recover it.
00:28:32.288 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1791151
00:28:32.288 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:32.288 [2024-12-10 12:36:54.189626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.288 [2024-12-10 12:36:54.189662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.288 qpair failed and we were unable to recover it.
00:28:32.288 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1791151 ']'
00:28:32.288 [2024-12-10 12:36:54.189940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.288 [2024-12-10 12:36:54.189977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.288 qpair failed and we were unable to recover it.
00:28:32.288 [2024-12-10 12:36:54.190183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.288 [2024-12-10 12:36:54.190224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.288 qpair failed and we were unable to recover it.
00:28:32.288 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:32.288 [2024-12-10 12:36:54.190440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.288 [2024-12-10 12:36:54.190472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.288 qpair failed and we were unable to recover it.
00:28:32.288 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:32.288 [2024-12-10 12:36:54.190598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.288 [2024-12-10 12:36:54.190632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.288 qpair failed and we were unable to recover it.
00:28:32.288 [2024-12-10 12:36:54.190820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.288 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:32.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:32.288 [2024-12-10 12:36:54.190856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.288 qpair failed and we were unable to recover it.
00:28:32.288 [2024-12-10 12:36:54.191049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.288 [2024-12-10 12:36:54.191088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.288 qpair failed and we were unable to recover it.
00:28:32.288 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:32.288 [2024-12-10 12:36:54.191280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.288 [2024-12-10 12:36:54.191318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.288 qpair failed and we were unable to recover it.
00:28:32.288 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:32.288 [2024-12-10 12:36:54.191563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.288 [2024-12-10 12:36:54.191597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.288 qpair failed and we were unable to recover it.
00:28:32.288 [2024-12-10 12:36:54.191902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.288 [2024-12-10 12:36:54.191939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.288 qpair failed and we were unable to recover it.
00:28:32.288 [2024-12-10 12:36:54.192192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.288 [2024-12-10 12:36:54.192231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.288 qpair failed and we were unable to recover it. 00:28:32.288 [2024-12-10 12:36:54.192422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.288 [2024-12-10 12:36:54.192455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.288 qpair failed and we were unable to recover it. 00:28:32.288 [2024-12-10 12:36:54.192698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.288 [2024-12-10 12:36:54.192729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.288 qpair failed and we were unable to recover it. 00:28:32.288 [2024-12-10 12:36:54.192911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.288 [2024-12-10 12:36:54.192943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.288 qpair failed and we were unable to recover it. 00:28:32.288 [2024-12-10 12:36:54.193211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.288 [2024-12-10 12:36:54.193245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.288 qpair failed and we were unable to recover it. 
00:28:32.288 [2024-12-10 12:36:54.193491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.288 [2024-12-10 12:36:54.193525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.288 qpair failed and we were unable to recover it. 00:28:32.288 [2024-12-10 12:36:54.193712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.288 [2024-12-10 12:36:54.193744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.288 qpair failed and we were unable to recover it. 00:28:32.288 [2024-12-10 12:36:54.193942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.288 [2024-12-10 12:36:54.193975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.288 qpair failed and we were unable to recover it. 00:28:32.288 [2024-12-10 12:36:54.194240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.288 [2024-12-10 12:36:54.194273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.288 qpair failed and we were unable to recover it. 00:28:32.288 [2024-12-10 12:36:54.194453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.194485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 
00:28:32.289 [2024-12-10 12:36:54.194699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.194731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.194902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.194935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.195131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.195175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.195441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.195474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.195654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.195686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 
00:28:32.289 [2024-12-10 12:36:54.195959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.195991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.196285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.196318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.196609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.196642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.196920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.196950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.197129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.197189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 
00:28:32.289 [2024-12-10 12:36:54.197392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.197423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.197608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.197639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.197835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.197866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.198130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.198177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.198316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.198348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 
00:28:32.289 [2024-12-10 12:36:54.198520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.198553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.198724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.198755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.199060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.199092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.199339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.199370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.199610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.199641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 
00:28:32.289 [2024-12-10 12:36:54.199836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.199866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.200107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.200139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.200270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.200302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.200485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.200516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.200649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.200680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 
00:28:32.289 [2024-12-10 12:36:54.200981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.201013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.201232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.201277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.201529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.201562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.201814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.201845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.202070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.202100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 
00:28:32.289 [2024-12-10 12:36:54.202243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.202284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.202485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.202519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.202695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.202727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.202909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.202950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.203202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.203234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 
00:28:32.289 [2024-12-10 12:36:54.203415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.203447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.203618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.203660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.203834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.289 [2024-12-10 12:36:54.203865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.289 qpair failed and we were unable to recover it. 00:28:32.289 [2024-12-10 12:36:54.204152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.204202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-10 12:36:54.204399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.204432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 
00:28:32.290 [2024-12-10 12:36:54.204607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.204642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-10 12:36:54.204832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.204865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-10 12:36:54.204973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.205005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-10 12:36:54.205143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.205201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-10 12:36:54.205409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.205443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 
00:28:32.290 [2024-12-10 12:36:54.205643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.205674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-10 12:36:54.205797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.205828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-10 12:36:54.206025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.206056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-10 12:36:54.206248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.206280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-10 12:36:54.206555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.206586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 
00:28:32.290 [2024-12-10 12:36:54.206866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.206898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-10 12:36:54.207072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.207104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-10 12:36:54.207286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.207319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-10 12:36:54.207432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.207463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-10 12:36:54.207651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.207683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 
00:28:32.290 [2024-12-10 12:36:54.207875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.207908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-10 12:36:54.208044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.208074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-10 12:36:54.208211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.208247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-10 12:36:54.208423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.208455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-10 12:36:54.208661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.208692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 
00:28:32.290 [2024-12-10 12:36:54.208871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.208903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-10 12:36:54.209144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.209202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-10 12:36:54.209544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.209581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-10 12:36:54.209847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.209879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 00:28:32.290 [2024-12-10 12:36:54.210088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.290 [2024-12-10 12:36:54.210118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.290 qpair failed and we were unable to recover it. 
00:28:32.290 [2024-12-10 12:36:54.210432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.290 [2024-12-10 12:36:54.210464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.290 qpair failed and we were unable to recover it.
[... identical connect()/nvme_tcp_qpair_connect_sock error pair repeated for tqpair=0x7f88d0000b90 from 12:36:54.210712 through 12:36:54.229119 ...]
00:28:32.292 [2024-12-10 12:36:54.229295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.292 [2024-12-10 12:36:54.229368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.292 qpair failed and we were unable to recover it.
00:28:32.292 [2024-12-10 12:36:54.229620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.292 [2024-12-10 12:36:54.229689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.292 qpair failed and we were unable to recover it.
[... identical error pair repeated for tqpair=0x7f88d8000b90 from 12:36:54.229881 through 12:36:54.235405 ...]
00:28:32.293 [2024-12-10 12:36:54.235565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.293 [2024-12-10 12:36:54.235616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.293 qpair failed and we were unable to recover it.
[... identical error pair repeated for tqpair=0x1574be0 through 12:36:54.236137 ...]
00:28:32.293 [2024-12-10 12:36:54.236323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-10 12:36:54.236355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-10 12:36:54.236468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-10 12:36:54.236500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-10 12:36:54.236607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-10 12:36:54.236638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-10 12:36:54.236835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-10 12:36:54.236866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-10 12:36:54.236968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-10 12:36:54.236999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 
00:28:32.293 [2024-12-10 12:36:54.237172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-10 12:36:54.237204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-10 12:36:54.237328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-10 12:36:54.237362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-10 12:36:54.237484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-10 12:36:54.237514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.293 [2024-12-10 12:36:54.237633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.293 [2024-12-10 12:36:54.237665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.293 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.237793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.237824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 
00:28:32.294 [2024-12-10 12:36:54.237995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.238027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.238210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.238244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.238361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.238392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.238589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.238621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.238735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.238766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 
00:28:32.294 [2024-12-10 12:36:54.238935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.238967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.239209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.239241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.239345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.239375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.239480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.239511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.239692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.239724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 
00:28:32.294 [2024-12-10 12:36:54.239839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.239869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.239997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.240028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.240144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.240185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.240241] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:28:32.294 [2024-12-10 12:36:54.240284] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:32.294 [2024-12-10 12:36:54.240381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.240411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 
00:28:32.294 [2024-12-10 12:36:54.240582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.240612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.240828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.240858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.241037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.241067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.241262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.241294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.241476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.241508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 
00:28:32.294 [2024-12-10 12:36:54.241719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.241751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.241997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.242028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.242251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.242284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.242456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.242488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.242666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.242697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 
00:28:32.294 [2024-12-10 12:36:54.242878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.242909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.243027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.243058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.243186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.243219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.243509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.243541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.243645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.243676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 
00:28:32.294 [2024-12-10 12:36:54.243849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.243882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.244143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.244182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.244358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.244391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.244593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.244625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.244805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.244836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 
00:28:32.294 [2024-12-10 12:36:54.244949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.244980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.245191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.245223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.245415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.294 [2024-12-10 12:36:54.245448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.294 qpair failed and we were unable to recover it. 00:28:32.294 [2024-12-10 12:36:54.245567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.245598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-10 12:36:54.245721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.245752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 
00:28:32.295 [2024-12-10 12:36:54.245855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.245886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-10 12:36:54.246077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.246117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-10 12:36:54.246322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.246365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-10 12:36:54.246501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.246535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-10 12:36:54.246645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.246678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 
00:28:32.295 [2024-12-10 12:36:54.246825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.246858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-10 12:36:54.247057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.247090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-10 12:36:54.247291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.247327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-10 12:36:54.247552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.247583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-10 12:36:54.247791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.247822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 
00:28:32.295 [2024-12-10 12:36:54.247928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.247959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-10 12:36:54.248082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.248113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-10 12:36:54.248331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.248365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-10 12:36:54.248559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.248591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-10 12:36:54.248790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.248820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 
00:28:32.295 [2024-12-10 12:36:54.249031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.249063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-10 12:36:54.249233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.249266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-10 12:36:54.249386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.249423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-10 12:36:54.249527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.249557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-10 12:36:54.249749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.249781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 
00:28:32.295 [2024-12-10 12:36:54.249954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.249985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-10 12:36:54.250178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.250210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-10 12:36:54.250382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.250414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-10 12:36:54.250592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.250624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 00:28:32.295 [2024-12-10 12:36:54.250740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.295 [2024-12-10 12:36:54.250771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.295 qpair failed and we were unable to recover it. 
00:28:32.295 [2024-12-10 12:36:54.250870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.295 [2024-12-10 12:36:54.250902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.295 qpair failed and we were unable to recover it.
00:28:32.295 [2024-12-10 12:36:54.251016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.295 [2024-12-10 12:36:54.251046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.295 qpair failed and we were unable to recover it.
00:28:32.295 [2024-12-10 12:36:54.251323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.295 [2024-12-10 12:36:54.251355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.295 qpair failed and we were unable to recover it.
00:28:32.295 [2024-12-10 12:36:54.251491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.295 [2024-12-10 12:36:54.251531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.295 qpair failed and we were unable to recover it.
00:28:32.295 [2024-12-10 12:36:54.251676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.295 [2024-12-10 12:36:54.251708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.295 qpair failed and we were unable to recover it.
00:28:32.295 [2024-12-10 12:36:54.251924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.295 [2024-12-10 12:36:54.251962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.295 qpair failed and we were unable to recover it.
00:28:32.295 [2024-12-10 12:36:54.252171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.295 [2024-12-10 12:36:54.252205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.295 qpair failed and we were unable to recover it.
00:28:32.295 [2024-12-10 12:36:54.252324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.295 [2024-12-10 12:36:54.252355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.295 qpair failed and we were unable to recover it.
00:28:32.295 [2024-12-10 12:36:54.252537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.295 [2024-12-10 12:36:54.252573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.295 qpair failed and we were unable to recover it.
00:28:32.295 [2024-12-10 12:36:54.252759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.295 [2024-12-10 12:36:54.252794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.295 qpair failed and we were unable to recover it.
00:28:32.295 [2024-12-10 12:36:54.252989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.295 [2024-12-10 12:36:54.253020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.295 qpair failed and we were unable to recover it.
00:28:32.295 [2024-12-10 12:36:54.253207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.295 [2024-12-10 12:36:54.253243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.295 qpair failed and we were unable to recover it.
00:28:32.295 [2024-12-10 12:36:54.253532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.253565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.253770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.253802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.253921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.253953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.254084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.254115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.254299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.254339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.254448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.254486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.254609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.254641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.254911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.254943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.255048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.255078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.255268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.255301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.255525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.255557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.255677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.255708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.255922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.255953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.256074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.256107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.256329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.256361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.256486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.256517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.256715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.256752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.256860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.256892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.257072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.257104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.257352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.257385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.257559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.257590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.257728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.257760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.257936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.257970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.258142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.258183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.258386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.258418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.258549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.258581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.258697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.258728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.258856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.258888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.259025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.259056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.259254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.259287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.259480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.259512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.259693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.259733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.259854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.259892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.259994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.260028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.260177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.260214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.260330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.260368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.260534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.260566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.260759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.260790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.260958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.260991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.296 qpair failed and we were unable to recover it.
00:28:32.296 [2024-12-10 12:36:54.261115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.296 [2024-12-10 12:36:54.261147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.261330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.261369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.261496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.261544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.261667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.261698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.261870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.261903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.262096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.262142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.262355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.262396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.262578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.262608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.262781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.262814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.262987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.263017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.263136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.263187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.263314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.263347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.263494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.263528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.263636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.263667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.263785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.263817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.264018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.264051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.264229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.264265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.264483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.264522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.264702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.264732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.264918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.264950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.265121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.265152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.265287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.265322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.265567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.265597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.265721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.265752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.265853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.265884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.265988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.266019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.266191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.266225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.266340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.266371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.266550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.266582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.266691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.266723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.266962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.266992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.267105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.267136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.267408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.267478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.267696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.267735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.297 [2024-12-10 12:36:54.267927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.297 [2024-12-10 12:36:54.267959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.297 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.268098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.268129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.268277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.268314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.268445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.268477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.268599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.268631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.268749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.268780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.268893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.268923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.269027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.269058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.269269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.269302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.269474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.269505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.269618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.269648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.269820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.269852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.270100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.270131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.270317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.270348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.270518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.270549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.270744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.270775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.270963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.270994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.271108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.271138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.271342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.271373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.271502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.271534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.271715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.271745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.271914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.271946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.272117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.272147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.272325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.272357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.272469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.272500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.272695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.272727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.272898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.272929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.273097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.273129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.273251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.273282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.273479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.298 [2024-12-10 12:36:54.273511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.298 qpair failed and we were unable to recover it.
00:28:32.298 [2024-12-10 12:36:54.273610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-10 12:36:54.273641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-10 12:36:54.273812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-10 12:36:54.273843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-10 12:36:54.274019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-10 12:36:54.274050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-10 12:36:54.274153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-10 12:36:54.274197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-10 12:36:54.274370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-10 12:36:54.274400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 
00:28:32.298 [2024-12-10 12:36:54.274570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-10 12:36:54.274601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-10 12:36:54.274700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-10 12:36:54.274729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-10 12:36:54.274859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-10 12:36:54.274891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-10 12:36:54.275062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.298 [2024-12-10 12:36:54.275100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.298 qpair failed and we were unable to recover it. 00:28:32.298 [2024-12-10 12:36:54.275234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.275267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 
00:28:32.299 [2024-12-10 12:36:54.275408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.275438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.275633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.275666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.275775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.275806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.275928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.275958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.276126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.276166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 
00:28:32.299 [2024-12-10 12:36:54.276346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.276377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.276561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.276591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.276695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.276727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.276895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.276925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.277096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.277127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 
00:28:32.299 [2024-12-10 12:36:54.277413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.277453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.277640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.277672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.277785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.277817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.277935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.277966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.278083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.278119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 
00:28:32.299 [2024-12-10 12:36:54.278342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.278375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.278552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.278583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.278691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.278722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.278848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.278897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.279065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.279095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 
00:28:32.299 [2024-12-10 12:36:54.279300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.279332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.279503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.279533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.279703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.279733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.279901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.279932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.280178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.280216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 
00:28:32.299 [2024-12-10 12:36:54.280394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.280428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.280598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.280628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.280914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.280946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.281058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.281087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.281205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.281237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 
00:28:32.299 [2024-12-10 12:36:54.281478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.281509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.281632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.281662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.281765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.281796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.282002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.282033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.282137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.282187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 
00:28:32.299 [2024-12-10 12:36:54.282359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.282390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.282500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.282530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.282714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.299 [2024-12-10 12:36:54.282745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.299 qpair failed and we were unable to recover it. 00:28:32.299 [2024-12-10 12:36:54.282851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.282882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-10 12:36:54.283068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.283098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 
00:28:32.300 [2024-12-10 12:36:54.283297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.283329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-10 12:36:54.283500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.283531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-10 12:36:54.283698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.283729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-10 12:36:54.283939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.283970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-10 12:36:54.284095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.284125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 
00:28:32.300 [2024-12-10 12:36:54.284257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.284288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-10 12:36:54.284403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.284433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-10 12:36:54.284654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.284686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-10 12:36:54.284852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.284881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-10 12:36:54.284997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.285028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 
00:28:32.300 [2024-12-10 12:36:54.285197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.285229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-10 12:36:54.285428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.285459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-10 12:36:54.285672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.285704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-10 12:36:54.285805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.285835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-10 12:36:54.286008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.286039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 
00:28:32.300 [2024-12-10 12:36:54.286278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.286309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-10 12:36:54.286479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.286510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-10 12:36:54.286693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.286724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-10 12:36:54.286835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.286865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-10 12:36:54.286992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.287022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 
00:28:32.300 [2024-12-10 12:36:54.287253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.287284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-10 12:36:54.287459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.287490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-10 12:36:54.287682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.287712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-10 12:36:54.287901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.287932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 00:28:32.300 [2024-12-10 12:36:54.288101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.300 [2024-12-10 12:36:54.288131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.300 qpair failed and we were unable to recover it. 
00:28:32.300 [2024-12-10 12:36:54.288309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.300 [2024-12-10 12:36:54.288345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.300 qpair failed and we were unable to recover it.
00:28:32.300 [2024-12-10 12:36:54.288604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.300 [2024-12-10 12:36:54.288635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.300 qpair failed and we were unable to recover it.
00:28:32.300 [2024-12-10 12:36:54.288764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.300 [2024-12-10 12:36:54.288813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.300 qpair failed and we were unable to recover it.
00:28:32.300 [2024-12-10 12:36:54.289029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.300 [2024-12-10 12:36:54.289058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.300 qpair failed and we were unable to recover it.
00:28:32.300 [2024-12-10 12:36:54.289199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.300 [2024-12-10 12:36:54.289234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.300 qpair failed and we were unable to recover it.
00:28:32.300 [2024-12-10 12:36:54.289446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.300 [2024-12-10 12:36:54.289477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.300 qpair failed and we were unable to recover it.
00:28:32.300 [2024-12-10 12:36:54.289662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.300 [2024-12-10 12:36:54.289693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.300 qpair failed and we were unable to recover it.
00:28:32.300 [2024-12-10 12:36:54.289884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.300 [2024-12-10 12:36:54.289913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.300 qpair failed and we were unable to recover it.
00:28:32.300 [2024-12-10 12:36:54.290080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.300 [2024-12-10 12:36:54.290111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.300 qpair failed and we were unable to recover it.
00:28:32.300 [2024-12-10 12:36:54.290288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.300 [2024-12-10 12:36:54.290320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.300 qpair failed and we were unable to recover it.
00:28:32.300 [2024-12-10 12:36:54.290510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.300 [2024-12-10 12:36:54.290540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.300 qpair failed and we were unable to recover it.
00:28:32.300 [2024-12-10 12:36:54.290642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.300 [2024-12-10 12:36:54.290671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.300 qpair failed and we were unable to recover it.
00:28:32.300 [2024-12-10 12:36:54.290792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.290824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.290997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.291027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.291203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.291235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.291346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.291376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.291547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.291578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.291677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.291707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.291904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.291935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.292046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.292076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.292197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.292228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.292346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.292376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.292549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.292579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.292744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.292776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.292941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.292971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.293177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.293209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.293415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.293446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.293570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.293602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.293773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.293802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.293981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.294012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.294196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.294228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.294341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.294372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.294542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.294572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.294767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.294799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.295029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.295059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.295247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.295278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.295448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.295479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.295643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.295674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.295788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.295819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.295923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.295953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.296144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.296190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.296364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.296395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.296592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.296624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.296738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.296769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.296869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.296899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.297089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.297120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.301 [2024-12-10 12:36:54.297334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.301 [2024-12-10 12:36:54.297367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.301 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.297535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.297566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.297783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.297813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.297982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.298013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.298203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.298238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.298442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.298473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.298586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.298615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.298732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.298764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.298889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.298920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.299096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.299127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.299262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.299304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.299423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.299455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.299556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.299587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.299716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.299747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.299933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.299964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.300242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.300276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.300460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.300490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.300617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.300648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.300834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.300865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.300967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.300998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.301115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.301146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.301288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.301336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.301526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.301561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.301685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.301716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.301921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.301951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.302061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.302090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.302225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.302257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.302387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.302419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.302529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.302559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.302680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.302710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.302849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.302879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.303046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.303077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.303313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.303345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.303513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.303544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.303656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.303692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.303945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.303978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.304147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.304188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.304379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.304411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.304617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.304648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.302 [2024-12-10 12:36:54.304854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.302 [2024-12-10 12:36:54.304884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.302 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.305003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.305033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.305151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.305192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.305361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.305392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.305511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.305541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.305659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.305689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.305960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.305992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.306186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.306218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.306401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.306431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.306602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.306633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.306805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.306836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.307006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.307036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.307170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.307203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.307338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.307368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.307477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.307508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.307613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.307643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.307813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.307845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.308009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.308040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.308298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.308330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.308517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.308548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.308721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.308751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.308936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.308967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.309086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.309122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.309270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.309303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.309519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.309551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.309727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.309758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.309873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.309904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.310070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.310101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.310228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.310260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.310453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.303 [2024-12-10 12:36:54.310484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.303 qpair failed and we were unable to recover it.
00:28:32.303 [2024-12-10 12:36:54.310725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-10 12:36:54.310756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.303 qpair failed and we were unable to recover it. 00:28:32.303 [2024-12-10 12:36:54.310873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-10 12:36:54.310904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.303 qpair failed and we were unable to recover it. 00:28:32.303 [2024-12-10 12:36:54.311020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-10 12:36:54.311052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.303 qpair failed and we were unable to recover it. 00:28:32.303 [2024-12-10 12:36:54.311220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-10 12:36:54.311253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.303 qpair failed and we were unable to recover it. 00:28:32.303 [2024-12-10 12:36:54.311460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-10 12:36:54.311492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.303 qpair failed and we were unable to recover it. 
00:28:32.303 [2024-12-10 12:36:54.311598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-10 12:36:54.311644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.303 qpair failed and we were unable to recover it. 00:28:32.303 [2024-12-10 12:36:54.311843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-10 12:36:54.311875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.303 qpair failed and we were unable to recover it. 00:28:32.303 [2024-12-10 12:36:54.311986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-10 12:36:54.312017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.303 qpair failed and we were unable to recover it. 00:28:32.303 [2024-12-10 12:36:54.312132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-10 12:36:54.312173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.303 qpair failed and we were unable to recover it. 00:28:32.303 [2024-12-10 12:36:54.312303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.303 [2024-12-10 12:36:54.312335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.303 qpair failed and we were unable to recover it. 
00:28:32.303 [2024-12-10 12:36:54.312573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.312603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.312770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.312800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.312904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.312934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.313044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.313074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.313360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.313392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 
00:28:32.304 [2024-12-10 12:36:54.313580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.313610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.313720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.313749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.313862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.313893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.314065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.314096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.314234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.314268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 
00:28:32.304 [2024-12-10 12:36:54.314459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.314489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.314659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.314690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.314792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.314823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.314932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.314963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.315151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.315196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 
00:28:32.304 [2024-12-10 12:36:54.315315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.315347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.315460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.315491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.315601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.315630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.315812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.315843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.316013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.316044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 
00:28:32.304 [2024-12-10 12:36:54.316231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.316264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.316451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.316482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.316708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.316757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.316880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.316913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.317015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.317047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 
00:28:32.304 [2024-12-10 12:36:54.317218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.317252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.317423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.317455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.317570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.317602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.317783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.317814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.317930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.317962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 
00:28:32.304 [2024-12-10 12:36:54.318076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.318107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.318317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.318349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.318519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.318550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.318732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.318763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.319021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.319053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 
00:28:32.304 [2024-12-10 12:36:54.319166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.319207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.319317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.319348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.319448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.319480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.319653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.304 [2024-12-10 12:36:54.319685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.304 qpair failed and we were unable to recover it. 00:28:32.304 [2024-12-10 12:36:54.319854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.319885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 
00:28:32.305 [2024-12-10 12:36:54.320125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.320167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.320290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.320322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.320438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.320469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.320681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.320713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.320916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.320949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 
00:28:32.305 [2024-12-10 12:36:54.321056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.321086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.321256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.321290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.321459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.321490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.321593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.321624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.321741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.321773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 
00:28:32.305 [2024-12-10 12:36:54.322010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.322042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.322155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.322196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.322361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.322393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.322453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:32.305 [2024-12-10 12:36:54.322504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.322534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.322725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.322757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 
00:28:32.305 [2024-12-10 12:36:54.322932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.322963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.323182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.323215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.323336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.323367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.323583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.323614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.323780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.323812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 
00:28:32.305 [2024-12-10 12:36:54.323996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.324027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.324194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.324228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.324481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.324513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.324628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.324659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.324921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.324952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 
00:28:32.305 [2024-12-10 12:36:54.325142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.325184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.325289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.325319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.325511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.325543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.325717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.325748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.325863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.325895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 
00:28:32.305 [2024-12-10 12:36:54.326022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.326054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.326177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.326210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.326313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.326345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.326445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.326476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.326646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.326678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 
00:28:32.305 [2024-12-10 12:36:54.326846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.326884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.327121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.327153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.305 [2024-12-10 12:36:54.327337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.305 [2024-12-10 12:36:54.327370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.305 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.327541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.327573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.327821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.327853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 
00:28:32.306 [2024-12-10 12:36:54.327953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.327985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.328083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.328115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.328315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.328350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.328522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.328554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.328734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.328767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 
00:28:32.306 [2024-12-10 12:36:54.328936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.328969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.329144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.329188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.329367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.329399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.329639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.329671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.329791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.329824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 
00:28:32.306 [2024-12-10 12:36:54.329932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.329964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.330080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.330112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.330302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.330334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.330524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.330557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.330731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.330764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 
00:28:32.306 [2024-12-10 12:36:54.330946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.330978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.331188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.331222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.331396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.331428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.331595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.331627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.331831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.331865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 
00:28:32.306 [2024-12-10 12:36:54.331994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.332027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.332142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.332186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.332324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.332358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.332465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.332497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.332600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.332632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 
00:28:32.306 [2024-12-10 12:36:54.332827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.332860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.333029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.333060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.333239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.333272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.333455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.333488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.333620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.333652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 
00:28:32.306 [2024-12-10 12:36:54.333766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.333798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.306 [2024-12-10 12:36:54.333970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.306 [2024-12-10 12:36:54.334003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.306 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.334120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.334153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.334333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.334365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.334468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.334500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 
00:28:32.307 [2024-12-10 12:36:54.334767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.334805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.334926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.334957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.335080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.335111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.335265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.335300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.335413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.335445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 
00:28:32.307 [2024-12-10 12:36:54.335638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.335669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.335932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.335964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.336075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.336107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.336223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.336255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.336445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.336478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 
00:28:32.307 [2024-12-10 12:36:54.336668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.336701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.336802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.336833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.337002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.337034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.337144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.337187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.337397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.337429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 
00:28:32.307 [2024-12-10 12:36:54.337595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.337626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.337741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.337772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.337882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.337914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.338089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.338121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.338336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.338369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 
00:28:32.307 [2024-12-10 12:36:54.338468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.338499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.338663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.338695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.338862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.338893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.339056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.339087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.339259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.339294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 
00:28:32.307 [2024-12-10 12:36:54.339536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.339566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.339747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.339778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.339895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.339927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.340213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.340245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.340412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.340444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 
00:28:32.307 [2024-12-10 12:36:54.340615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.340646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.340839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.340872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.340997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.341029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.341200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.341233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.307 [2024-12-10 12:36:54.341335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.341367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 
00:28:32.307 [2024-12-10 12:36:54.341570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.307 [2024-12-10 12:36:54.341603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.307 qpair failed and we were unable to recover it. 00:28:32.308 [2024-12-10 12:36:54.341714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.308 [2024-12-10 12:36:54.341746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.308 qpair failed and we were unable to recover it. 00:28:32.308 [2024-12-10 12:36:54.341982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.308 [2024-12-10 12:36:54.342013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.308 qpair failed and we were unable to recover it. 00:28:32.308 [2024-12-10 12:36:54.342127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.308 [2024-12-10 12:36:54.342167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.308 qpair failed and we were unable to recover it. 00:28:32.308 [2024-12-10 12:36:54.342286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.308 [2024-12-10 12:36:54.342317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.308 qpair failed and we were unable to recover it. 
00:28:32.308 [2024-12-10 12:36:54.342445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.308 [2024-12-10 12:36:54.342482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.308 qpair failed and we were unable to recover it. 00:28:32.308 [2024-12-10 12:36:54.342616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.308 [2024-12-10 12:36:54.342648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.308 qpair failed and we were unable to recover it. 00:28:32.308 [2024-12-10 12:36:54.342756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.308 [2024-12-10 12:36:54.342787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.308 qpair failed and we were unable to recover it. 00:28:32.308 [2024-12-10 12:36:54.342956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.308 [2024-12-10 12:36:54.342987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.308 qpair failed and we were unable to recover it. 00:28:32.308 [2024-12-10 12:36:54.343154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.308 [2024-12-10 12:36:54.343196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.308 qpair failed and we were unable to recover it. 
00:28:32.308 [2024-12-10 12:36:54.343309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.308 [2024-12-10 12:36:54.343341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.308 qpair failed and we were unable to recover it. 00:28:32.308 [2024-12-10 12:36:54.343441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.308 [2024-12-10 12:36:54.343472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.308 qpair failed and we were unable to recover it. 00:28:32.308 [2024-12-10 12:36:54.343607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.308 [2024-12-10 12:36:54.343639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.308 qpair failed and we were unable to recover it. 00:28:32.308 [2024-12-10 12:36:54.343880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.308 [2024-12-10 12:36:54.343912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.308 qpair failed and we were unable to recover it. 00:28:32.308 [2024-12-10 12:36:54.344016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.308 [2024-12-10 12:36:54.344047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.308 qpair failed and we were unable to recover it. 
00:28:32.309 [2024-12-10 12:36:54.352945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.309 [2024-12-10 12:36:54.352977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.309 qpair failed and we were unable to recover it.
00:28:32.309 [2024-12-10 12:36:54.353099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.309 [2024-12-10 12:36:54.353130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.309 qpair failed and we were unable to recover it.
00:28:32.309 [2024-12-10 12:36:54.353335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.309 [2024-12-10 12:36:54.353387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.309 qpair failed and we were unable to recover it.
00:28:32.309 [2024-12-10 12:36:54.353573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.309 [2024-12-10 12:36:54.353605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.309 qpair failed and we were unable to recover it.
00:28:32.309 [2024-12-10 12:36:54.353778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.309 [2024-12-10 12:36:54.353809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.309 qpair failed and we were unable to recover it.
00:28:32.309 [2024-12-10 12:36:54.354808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.309 [2024-12-10 12:36:54.354838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.309 qpair failed and we were unable to recover it.
00:28:32.309 [2024-12-10 12:36:54.355002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.309 [2024-12-10 12:36:54.355032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.309 qpair failed and we were unable to recover it.
00:28:32.309 [2024-12-10 12:36:54.355287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.309 [2024-12-10 12:36:54.355327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.309 qpair failed and we were unable to recover it.
00:28:32.309 [2024-12-10 12:36:54.355437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.309 [2024-12-10 12:36:54.355469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.309 qpair failed and we were unable to recover it.
00:28:32.309 [2024-12-10 12:36:54.355649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.309 [2024-12-10 12:36:54.355681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.309 qpair failed and we were unable to recover it.
00:28:32.310 [2024-12-10 12:36:54.359663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.310 [2024-12-10 12:36:54.359693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.310 qpair failed and we were unable to recover it.
00:28:32.310 [2024-12-10 12:36:54.359862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.310 [2024-12-10 12:36:54.359895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.310 qpair failed and we were unable to recover it.
00:28:32.310 [2024-12-10 12:36:54.360090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.310 [2024-12-10 12:36:54.360123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.310 qpair failed and we were unable to recover it.
00:28:32.310 [2024-12-10 12:36:54.360249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.310 [2024-12-10 12:36:54.360288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.310 qpair failed and we were unable to recover it.
00:28:32.310 [2024-12-10 12:36:54.360407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.310 [2024-12-10 12:36:54.360441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.310 qpair failed and we were unable to recover it.
00:28:32.310 [2024-12-10 12:36:54.362741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.310 [2024-12-10 12:36:54.362774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.310 qpair failed and we were unable to recover it.
00:28:32.310 [2024-12-10 12:36:54.362966] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:32.310 [2024-12-10 12:36:54.362995] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:32.310 [2024-12-10 12:36:54.363003] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:32.310 [2024-12-10 12:36:54.363012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:32.310 [2024-12-10 12:36:54.363018] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:32.310 [2024-12-10 12:36:54.363053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.310 [2024-12-10 12:36:54.363084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.311 qpair failed and we were unable to recover it.
00:28:32.311 [2024-12-10 12:36:54.363270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.311 [2024-12-10 12:36:54.363304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.311 qpair failed and we were unable to recover it.
00:28:32.311 [2024-12-10 12:36:54.364683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:28:32.311 [2024-12-10 12:36:54.364789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:28:32.311 [2024-12-10 12:36:54.364899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:28:32.311 [2024-12-10 12:36:54.364900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:28:32.311 [2024-12-10 12:36:54.365761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.311 [2024-12-10 12:36:54.365795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.311 qpair failed and we were unable to recover it. 00:28:32.311 [2024-12-10 12:36:54.366007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.311 [2024-12-10 12:36:54.366041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.311 qpair failed and we were unable to recover it. 00:28:32.311 [2024-12-10 12:36:54.366148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.311 [2024-12-10 12:36:54.366194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.311 qpair failed and we were unable to recover it. 00:28:32.311 [2024-12-10 12:36:54.366371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.311 [2024-12-10 12:36:54.366405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.311 qpair failed and we were unable to recover it. 00:28:32.311 [2024-12-10 12:36:54.366586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.311 [2024-12-10 12:36:54.366620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.311 qpair failed and we were unable to recover it. 
00:28:32.311 [2024-12-10 12:36:54.366791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.311 [2024-12-10 12:36:54.366824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.311 qpair failed and we were unable to recover it. 00:28:32.311 [2024-12-10 12:36:54.367003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.311 [2024-12-10 12:36:54.367038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.311 qpair failed and we were unable to recover it. 00:28:32.311 [2024-12-10 12:36:54.367212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.311 [2024-12-10 12:36:54.367246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.311 qpair failed and we were unable to recover it. 00:28:32.311 [2024-12-10 12:36:54.367417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.311 [2024-12-10 12:36:54.367450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.311 qpair failed and we were unable to recover it. 00:28:32.311 [2024-12-10 12:36:54.367571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.311 [2024-12-10 12:36:54.367604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.311 qpair failed and we were unable to recover it. 
00:28:32.311 [2024-12-10 12:36:54.367793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.311 [2024-12-10 12:36:54.367827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.311 qpair failed and we were unable to recover it. 00:28:32.311 [2024-12-10 12:36:54.368027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.311 [2024-12-10 12:36:54.368060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.311 qpair failed and we were unable to recover it. 00:28:32.311 [2024-12-10 12:36:54.368240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.311 [2024-12-10 12:36:54.368276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.311 qpair failed and we were unable to recover it. 00:28:32.311 [2024-12-10 12:36:54.368475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.311 [2024-12-10 12:36:54.368508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.311 qpair failed and we were unable to recover it. 00:28:32.311 [2024-12-10 12:36:54.368613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.311 [2024-12-10 12:36:54.368646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.311 qpair failed and we were unable to recover it. 
00:28:32.311 [2024-12-10 12:36:54.368816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.311 [2024-12-10 12:36:54.368851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.311 qpair failed and we were unable to recover it. 00:28:32.311 [2024-12-10 12:36:54.369040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.311 [2024-12-10 12:36:54.369075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.311 qpair failed and we were unable to recover it. 00:28:32.311 [2024-12-10 12:36:54.369263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.311 [2024-12-10 12:36:54.369298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.311 qpair failed and we were unable to recover it. 00:28:32.311 [2024-12-10 12:36:54.369496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.311 [2024-12-10 12:36:54.369530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.311 qpair failed and we were unable to recover it. 00:28:32.311 [2024-12-10 12:36:54.369774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.311 [2024-12-10 12:36:54.369808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.311 qpair failed and we were unable to recover it. 
00:28:32.311 [2024-12-10 12:36:54.369925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.311 [2024-12-10 12:36:54.369968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.311 qpair failed and we were unable to recover it. 00:28:32.311 [2024-12-10 12:36:54.370085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.311 [2024-12-10 12:36:54.370119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.311 qpair failed and we were unable to recover it. 00:28:32.311 [2024-12-10 12:36:54.370249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.311 [2024-12-10 12:36:54.370282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.311 qpair failed and we were unable to recover it. 00:28:32.311 [2024-12-10 12:36:54.370466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.370499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.370618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.370650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 
00:28:32.312 [2024-12-10 12:36:54.370771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.370803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.370971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.371003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.371201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.371236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.371405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.371438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.371563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.371596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 
00:28:32.312 [2024-12-10 12:36:54.371719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.371750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.371923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.371956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.372213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.372248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.372435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.372468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.372584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.372617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 
00:28:32.312 [2024-12-10 12:36:54.372786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.372819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.372991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.373030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.373151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.373205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.373328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.373361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.373476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.373510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 
00:28:32.312 [2024-12-10 12:36:54.373627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.373660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.373769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.373802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.373909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.373942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.374053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.374087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.374201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.374234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 
00:28:32.312 [2024-12-10 12:36:54.374424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.374455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.374663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.374697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.374799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.374831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.375001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.375034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.375152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.375201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 
00:28:32.312 [2024-12-10 12:36:54.375379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.375412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.375582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.375615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.375746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.375779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.375955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.375987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.376104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.376137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 
00:28:32.312 [2024-12-10 12:36:54.376418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.376451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.376626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.376658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.376773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.376805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.376955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.376989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.377100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.377133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 
00:28:32.312 [2024-12-10 12:36:54.377275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.377309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.377506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.377539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.312 qpair failed and we were unable to recover it. 00:28:32.312 [2024-12-10 12:36:54.377666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.312 [2024-12-10 12:36:54.377699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.313 qpair failed and we were unable to recover it. 00:28:32.313 [2024-12-10 12:36:54.377834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.313 [2024-12-10 12:36:54.377867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.313 qpair failed and we were unable to recover it. 00:28:32.313 [2024-12-10 12:36:54.378037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.313 [2024-12-10 12:36:54.378073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.313 qpair failed and we were unable to recover it. 
00:28:32.313 [2024-12-10 12:36:54.378246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.313 [2024-12-10 12:36:54.378280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.313 qpair failed and we were unable to recover it. 00:28:32.313 [2024-12-10 12:36:54.378399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.313 [2024-12-10 12:36:54.378431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.313 qpair failed and we were unable to recover it. 00:28:32.313 [2024-12-10 12:36:54.378619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.313 [2024-12-10 12:36:54.378654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.313 qpair failed and we were unable to recover it. 00:28:32.313 [2024-12-10 12:36:54.378832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.313 [2024-12-10 12:36:54.378865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.313 qpair failed and we were unable to recover it. 00:28:32.313 [2024-12-10 12:36:54.378987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.313 [2024-12-10 12:36:54.379021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.313 qpair failed and we were unable to recover it. 
00:28:32.313 [2024-12-10 12:36:54.379193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.313 [2024-12-10 12:36:54.379227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.313 qpair failed and we were unable to recover it. 00:28:32.313 [2024-12-10 12:36:54.379403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.313 [2024-12-10 12:36:54.379439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.313 qpair failed and we were unable to recover it. 00:28:32.313 [2024-12-10 12:36:54.379564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.313 [2024-12-10 12:36:54.379597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.313 qpair failed and we were unable to recover it. 00:28:32.313 [2024-12-10 12:36:54.379769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.313 [2024-12-10 12:36:54.379802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.313 qpair failed and we were unable to recover it. 00:28:32.313 [2024-12-10 12:36:54.379992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.313 [2024-12-10 12:36:54.380025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.313 qpair failed and we were unable to recover it. 
00:28:32.313 [2024-12-10 12:36:54.385736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.313 [2024-12-10 12:36:54.385769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.313 qpair failed and we were unable to recover it.
00:28:32.313 [2024-12-10 12:36:54.385983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.313 [2024-12-10 12:36:54.386016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.314 qpair failed and we were unable to recover it.
00:28:32.314 [2024-12-10 12:36:54.386261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.314 [2024-12-10 12:36:54.386295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.314 qpair failed and we were unable to recover it.
00:28:32.314 [2024-12-10 12:36:54.386565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.314 [2024-12-10 12:36:54.386624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.314 qpair failed and we were unable to recover it.
00:28:32.314 [2024-12-10 12:36:54.386821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.314 [2024-12-10 12:36:54.386856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.314 qpair failed and we were unable to recover it.
00:28:32.315 [2024-12-10 12:36:54.394298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.315 [2024-12-10 12:36:54.394333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.315 qpair failed and we were unable to recover it.
00:28:32.315 [2024-12-10 12:36:54.394517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.315 [2024-12-10 12:36:54.394550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.315 qpair failed and we were unable to recover it.
00:28:32.315 [2024-12-10 12:36:54.394747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.315 [2024-12-10 12:36:54.394779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.315 qpair failed and we were unable to recover it.
00:28:32.315 [2024-12-10 12:36:54.394994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.315 [2024-12-10 12:36:54.395054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.315 qpair failed and we were unable to recover it.
00:28:32.315 [2024-12-10 12:36:54.395295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.315 [2024-12-10 12:36:54.395347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.315 qpair failed and we were unable to recover it.
00:28:32.315 [2024-12-10 12:36:54.395488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.315 [2024-12-10 12:36:54.395530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.315 qpair failed and we were unable to recover it.
00:28:32.315 [2024-12-10 12:36:54.395705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.315 [2024-12-10 12:36:54.395737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.315 qpair failed and we were unable to recover it.
00:28:32.315 [2024-12-10 12:36:54.395939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.315 [2024-12-10 12:36:54.395971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.315 qpair failed and we were unable to recover it.
00:28:32.315 [2024-12-10 12:36:54.396091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.315 [2024-12-10 12:36:54.396124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.315 qpair failed and we were unable to recover it.
00:28:32.315 [2024-12-10 12:36:54.396329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.315 [2024-12-10 12:36:54.396366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.315 qpair failed and we were unable to recover it.
00:28:32.316 [2024-12-10 12:36:54.402611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.402643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.402814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.402847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.402967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.403001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.403112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.403149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.403335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.403369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 
00:28:32.316 [2024-12-10 12:36:54.403553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.403586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.403713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.403746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.403862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.403894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.404010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.404043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.404210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.404244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 
00:28:32.316 [2024-12-10 12:36:54.404413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.404446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.404618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.404651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.404818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.404850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.405099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.405132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.405316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.405348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 
00:28:32.316 [2024-12-10 12:36:54.405463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.405494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.405615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.405655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.405826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.405859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.406027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.406059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.406230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.406264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 
00:28:32.316 [2024-12-10 12:36:54.406469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.406503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.406671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.406704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.406978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.407011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.407203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.407237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.407408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.407440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 
00:28:32.316 [2024-12-10 12:36:54.407708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.407741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.407856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.407889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.408126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.408169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.408340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.408373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.408576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.408614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 
00:28:32.316 [2024-12-10 12:36:54.408893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.408930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.409200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.409235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.409503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.409535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.316 [2024-12-10 12:36:54.409735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.316 [2024-12-10 12:36:54.409768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.316 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.409940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.409972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 
00:28:32.317 [2024-12-10 12:36:54.410227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.410261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.410444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.410477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.410592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.410624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.410884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.410916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.411036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.411069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 
00:28:32.317 [2024-12-10 12:36:54.411234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.411268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.411406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.411439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.411638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.411680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.411888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.411933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.412064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.412098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 
00:28:32.317 [2024-12-10 12:36:54.412290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.412325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.412525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.412557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.412662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.412695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.412881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.412914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.413038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.413071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 
00:28:32.317 [2024-12-10 12:36:54.413243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.413279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.413559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.413594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.413719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.413752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.413939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.413972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.414169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.414204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 
00:28:32.317 [2024-12-10 12:36:54.414373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.414407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.414579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.414613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.414743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.414775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.414981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.415014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.415147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.415192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 
00:28:32.317 [2024-12-10 12:36:54.415367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.415402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.415572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.415606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.415722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.415755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.317 qpair failed and we were unable to recover it. 00:28:32.317 [2024-12-10 12:36:54.415939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.317 [2024-12-10 12:36:54.415980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 00:28:32.589 [2024-12-10 12:36:54.416226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.416263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 
00:28:32.589 [2024-12-10 12:36:54.416464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.416507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 00:28:32.589 [2024-12-10 12:36:54.416714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.416766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 00:28:32.589 [2024-12-10 12:36:54.416978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.417032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 00:28:32.589 [2024-12-10 12:36:54.417244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.417280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 00:28:32.589 [2024-12-10 12:36:54.417546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.417580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 
00:28:32.589 [2024-12-10 12:36:54.417757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.417791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 00:28:32.589 [2024-12-10 12:36:54.417910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.417943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 00:28:32.589 [2024-12-10 12:36:54.418084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.418116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 00:28:32.589 [2024-12-10 12:36:54.418242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.418275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 00:28:32.589 [2024-12-10 12:36:54.418474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.418506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 
00:28:32.589 [2024-12-10 12:36:54.418632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.418663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 00:28:32.589 [2024-12-10 12:36:54.418833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.418865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 00:28:32.589 [2024-12-10 12:36:54.419045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.419078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 00:28:32.589 [2024-12-10 12:36:54.419250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.419284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 00:28:32.589 [2024-12-10 12:36:54.419413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.419448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 
00:28:32.589 [2024-12-10 12:36:54.419618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.419650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 00:28:32.589 [2024-12-10 12:36:54.419766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.419798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 00:28:32.589 [2024-12-10 12:36:54.419921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.419953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 00:28:32.589 [2024-12-10 12:36:54.420069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.420102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 00:28:32.589 [2024-12-10 12:36:54.420287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.420321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 
00:28:32.589 [2024-12-10 12:36:54.420500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.420532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 00:28:32.589 [2024-12-10 12:36:54.420643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.420673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 00:28:32.589 [2024-12-10 12:36:54.420874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.420906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 00:28:32.589 [2024-12-10 12:36:54.421116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.421148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 00:28:32.589 [2024-12-10 12:36:54.421397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.589 [2024-12-10 12:36:54.421431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.589 qpair failed and we were unable to recover it. 
00:28:32.589 [2024-12-10 12:36:54.421689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.421721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.421891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.421923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.422115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.422148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.422335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.422368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.422564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.422597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 
00:28:32.590 [2024-12-10 12:36:54.422835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.422867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.423135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.423195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.423377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.423410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.423646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.423678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.423795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.423827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 
00:28:32.590 [2024-12-10 12:36:54.423949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.423981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.424180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.424214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.424479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.424511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.424693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.424725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.424917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.424950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 
00:28:32.590 [2024-12-10 12:36:54.425066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.425098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.425371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.425404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.425576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.425608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.425810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.425842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.425957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.425989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 
00:28:32.590 [2024-12-10 12:36:54.426167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.426205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.426421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.426453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.426575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.426608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.426784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.426816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.426985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.427017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 
00:28:32.590 [2024-12-10 12:36:54.427281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.427315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.427560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.427592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.427705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.427738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.427910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.427943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.428167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.428201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 
00:28:32.590 [2024-12-10 12:36:54.428315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.428345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.428586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.428617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.428914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.428946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.429121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.429153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.429432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.429465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 
00:28:32.590 [2024-12-10 12:36:54.429652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.429684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.429986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.430018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.430210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.590 [2024-12-10 12:36:54.430244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.590 qpair failed and we were unable to recover it. 00:28:32.590 [2024-12-10 12:36:54.430528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.430560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.430752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.430784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 
00:28:32.591 [2024-12-10 12:36:54.430891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.430922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.431033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.431065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.431367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.431401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.431571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.431604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.431842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.431875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 
00:28:32.591 [2024-12-10 12:36:54.432058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.432089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.432263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.432297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.432493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.432526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.432639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.432671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.432848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.432880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 
00:28:32.591 [2024-12-10 12:36:54.432994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.433026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.433231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.433264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.433440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.433472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.433672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.433705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.433900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.433932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 
00:28:32.591 [2024-12-10 12:36:54.434103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.434135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.434272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.434306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.434496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.434529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.434696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.434728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.434898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.434930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 
00:28:32.591 [2024-12-10 12:36:54.435106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.435145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.435345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.435378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.435546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.435578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.435683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.435716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.435897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.435930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 
00:28:32.591 [2024-12-10 12:36:54.436100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.436132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.436405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.436439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.436624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.436655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.436757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.436787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.436903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.436933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 
00:28:32.591 [2024-12-10 12:36:54.437047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.437079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.437254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.437288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.437449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.437481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.437694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.437726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.437969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.438001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 
00:28:32.591 [2024-12-10 12:36:54.438240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.438273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.438449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.591 [2024-12-10 12:36:54.438482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.591 qpair failed and we were unable to recover it. 00:28:32.591 [2024-12-10 12:36:54.438639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-10 12:36:54.438671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-10 12:36:54.438851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-10 12:36:54.438884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-10 12:36:54.439095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-10 12:36:54.439127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 
00:28:32.592 [2024-12-10 12:36:54.439287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-10 12:36:54.439341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-10 12:36:54.439531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-10 12:36:54.439564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-10 12:36:54.439682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-10 12:36:54.439714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-10 12:36:54.439841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-10 12:36:54.439873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 00:28:32.592 [2024-12-10 12:36:54.440074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.592 [2024-12-10 12:36:54.440107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.592 qpair failed and we were unable to recover it. 
00:28:32.592 [2024-12-10 12:36:54.440345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.592 [2024-12-10 12:36:54.440378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.592 qpair failed and we were unable to recover it.
00:28:32.592 [2024-12-10 12:36:54.447061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.592 [2024-12-10 12:36:54.447098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.592 qpair failed and we were unable to recover it.
00:28:32.594 [2024-12-10 12:36:54.456485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.594 [2024-12-10 12:36:54.456544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420
00:28:32.594 qpair failed and we were unable to recover it.
00:28:32.594 [2024-12-10 12:36:54.456837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.594 [2024-12-10 12:36:54.456880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.594 qpair failed and we were unable to recover it.
00:28:32.595 [2024-12-10 12:36:54.467744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-10 12:36:54.467777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-10 12:36:54.467890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-10 12:36:54.467923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-10 12:36:54.468189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-10 12:36:54.468222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-10 12:36:54.468394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-10 12:36:54.468427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-10 12:36:54.468615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-10 12:36:54.468648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 
00:28:32.595 [2024-12-10 12:36:54.468894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-10 12:36:54.468926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-10 12:36:54.469151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-10 12:36:54.469192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-10 12:36:54.469299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-10 12:36:54.469332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-10 12:36:54.469627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-10 12:36:54.469659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-10 12:36:54.469861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-10 12:36:54.469894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 
00:28:32.595 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:32.595 [2024-12-10 12:36:54.470068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-10 12:36:54.470101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-10 12:36:54.470276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-10 12:36:54.470309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:32.595 [2024-12-10 12:36:54.470512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-10 12:36:54.470544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-10 12:36:54.470715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:32.595 [2024-12-10 12:36:54.470745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 
00:28:32.595 [2024-12-10 12:36:54.470931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-10 12:36:54.470962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:32.595 [2024-12-10 12:36:54.471131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-10 12:36:54.471185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:32.595 [2024-12-10 12:36:54.471382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-10 12:36:54.471415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-10 12:36:54.471583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-10 12:36:54.471614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 
00:28:32.595 [2024-12-10 12:36:54.471724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-10 12:36:54.471755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-10 12:36:54.471950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-10 12:36:54.471984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-10 12:36:54.472116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-10 12:36:54.472148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-10 12:36:54.472375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-10 12:36:54.472406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-10 12:36:54.472608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-10 12:36:54.472640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 
00:28:32.595 [2024-12-10 12:36:54.472812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.595 [2024-12-10 12:36:54.472845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.595 qpair failed and we were unable to recover it. 00:28:32.595 [2024-12-10 12:36:54.473132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.473175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.473368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.473400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.473570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.473602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.473724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.473756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 
00:28:32.596 [2024-12-10 12:36:54.473922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.473954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.474061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.474096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.474246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.474279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.474541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.474573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.474687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.474723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 
00:28:32.596 [2024-12-10 12:36:54.474905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.474939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.475213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.475248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.475445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.475478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.475595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.475627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.475804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.475836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 
00:28:32.596 [2024-12-10 12:36:54.475966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.475998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.476119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.476152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.476348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.476381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.476670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.476702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.476978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.477010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 
00:28:32.596 [2024-12-10 12:36:54.477280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.477313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.477502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.477535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.477776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.477817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.477939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.477972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.478167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.478201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 
00:28:32.596 [2024-12-10 12:36:54.478391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.478425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.478539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.478573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.478800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.478833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.479081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.479114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.479357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.479391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 
00:28:32.596 [2024-12-10 12:36:54.479603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.479635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.479753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.479785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.479902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.479934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.480137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.480180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.480318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.480351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 
00:28:32.596 [2024-12-10 12:36:54.480536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.480569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.480700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.480732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.596 [2024-12-10 12:36:54.480998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.596 [2024-12-10 12:36:54.481032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.596 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.481206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.481242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.481356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.481388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 
00:28:32.597 [2024-12-10 12:36:54.481516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.481549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.481670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.481703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.481974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.482008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.482139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.482183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.482450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.482483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 
00:28:32.597 [2024-12-10 12:36:54.482672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.482706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.482882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.482920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.483100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.483136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.483320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.483353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.483639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.483693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 
00:28:32.597 [2024-12-10 12:36:54.483899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.483954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.484147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.484191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.484418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.484451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.484695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.484727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.484893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.484926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 
00:28:32.597 [2024-12-10 12:36:54.485033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.485065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.485187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.485219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.485340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.485371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.485499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.485530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.485772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.485802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 
00:28:32.597 [2024-12-10 12:36:54.485969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.486000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.486115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.486146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.486267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.486298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.486488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.486519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.486658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.486690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 
00:28:32.597 [2024-12-10 12:36:54.486811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.486843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.487019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.487051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.487223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.487258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.487441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.487474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.487576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.487606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 
00:28:32.597 [2024-12-10 12:36:54.487715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.487746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.487858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.487890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.488060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.488091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.488288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.488324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.488455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.488488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 
00:28:32.597 [2024-12-10 12:36:54.488604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.488636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.597 [2024-12-10 12:36:54.488824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.597 [2024-12-10 12:36:54.488855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.597 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.488984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.489017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.489188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.489222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.489335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.489367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 
00:28:32.598 [2024-12-10 12:36:54.489476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.489509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.489677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.489708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.489960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.489992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.490104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.490137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.490337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.490371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 
00:28:32.598 [2024-12-10 12:36:54.490559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.490592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.490714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.490746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.491008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.491040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.491207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.491240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.491409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.491447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 
00:28:32.598 [2024-12-10 12:36:54.491582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.491614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.491856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.491890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.492133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.492175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.492288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.492320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.492579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.492613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 
00:28:32.598 [2024-12-10 12:36:54.492728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.492760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.493021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.493054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.493249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.493282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.493456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.493488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.493672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.493703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 
00:28:32.598 [2024-12-10 12:36:54.493821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.493852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.494131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.494173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.494297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.494328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.494458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.494489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.494780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.494811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 
00:28:32.598 [2024-12-10 12:36:54.494922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.494954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.495087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.495119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.495255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.495290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.495509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.495542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.495666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.495697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 
00:28:32.598 [2024-12-10 12:36:54.495874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.598 [2024-12-10 12:36:54.495906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.598 qpair failed and we were unable to recover it. 00:28:32.598 [2024-12-10 12:36:54.496073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.496106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.496296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.496332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.496444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.496475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.496607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.496641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 
00:28:32.599 [2024-12-10 12:36:54.496827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.496861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.497171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.497204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.497425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.497458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.497700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.497733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.497846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.497880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 
00:28:32.599 [2024-12-10 12:36:54.498068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.498099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.498383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.498418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.498668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.498700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.498809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.498840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.498967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.498998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 
00:28:32.599 [2024-12-10 12:36:54.499118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.499150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.499287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.499319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.499435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.499469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.499661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.499693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.499818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.499855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 
00:28:32.599 [2024-12-10 12:36:54.499968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.500001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.500104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.500136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.500318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.500349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.500518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.500551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.500653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.500684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 
00:28:32.599 [2024-12-10 12:36:54.500801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.500833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.501019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.501052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.501181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.501213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.501324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.501357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.501460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.501493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 
00:28:32.599 [2024-12-10 12:36:54.501607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.501639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.501827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.501861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.502059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.502092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.502277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.502311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.502449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.502481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 
00:28:32.599 [2024-12-10 12:36:54.502610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.502642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.502813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.502846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.502957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.502991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.503106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.503137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 00:28:32.599 [2024-12-10 12:36:54.503285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.599 [2024-12-10 12:36:54.503319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.599 qpair failed and we were unable to recover it. 
00:28:32.599 [2024-12-10 12:36:54.503428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.600 [2024-12-10 12:36:54.503461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.600 qpair failed and we were unable to recover it. 00:28:32.600 [2024-12-10 12:36:54.503632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.600 [2024-12-10 12:36:54.503665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.600 qpair failed and we were unable to recover it. 00:28:32.600 [2024-12-10 12:36:54.503792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.600 [2024-12-10 12:36:54.503823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.600 qpair failed and we were unable to recover it. 00:28:32.600 [2024-12-10 12:36:54.503995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.600 [2024-12-10 12:36:54.504027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.600 qpair failed and we were unable to recover it. 00:28:32.600 [2024-12-10 12:36:54.504143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.600 [2024-12-10 12:36:54.504186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.600 qpair failed and we were unable to recover it. 
00:28:32.600 [2024-12-10 12:36:54.504301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.504332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.504519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.504551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.504676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.504709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.504820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.504852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.504970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.505002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.505179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.505212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.505382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.505413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.505526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.505557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.505677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.505709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.505823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.505854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.505978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.506011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.506180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.506214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.506385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.506416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.506547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:32.600 [2024-12-10 12:36:54.506580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.506842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.506874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:32.600 [2024-12-10 12:36:54.507080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.507114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.507270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.507305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:32.600 [2024-12-10 12:36:54.507420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.507451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.507569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.507602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:32.600 [2024-12-10 12:36:54.507712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.507743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.507864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.507896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.508079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.508112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.508226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.508257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.508366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.508398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.508582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.508615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.508717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.508749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.508953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.508985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.509175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.509207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.509327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.509360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.509468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.509499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.509603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.509635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.600 [2024-12-10 12:36:54.509759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.600 [2024-12-10 12:36:54.509790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.600 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.509903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.509935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.510114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.510147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.510277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.510309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.510420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.510452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.510575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.510608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.510711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.510743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.510912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.510944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.511144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.511187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.511446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.511478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.511600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.511632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.511828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.511859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.512121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.512155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.512407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.512440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.512555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.512587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.512705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.512737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.512958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.512990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.513115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.513147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.513397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.513429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.513628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.513660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.513774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.513806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.514028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.514066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.514245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.514278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.514520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.514552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.514776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.514808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.514920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.514951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.515090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.515122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.515260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.515295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.515463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.515495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.515607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.515640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.515890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.515922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.516210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.516243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.516503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.516536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.516708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.516740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.516922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.516955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.517210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.517244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.517435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.517468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.517658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.517690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.517804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.517836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.601 [2024-12-10 12:36:54.517953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.601 [2024-12-10 12:36:54.517986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.601 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.518091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.602 [2024-12-10 12:36:54.518123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.602 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.518286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.602 [2024-12-10 12:36:54.518320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.602 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.518439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.602 [2024-12-10 12:36:54.518469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.602 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.518660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.602 [2024-12-10 12:36:54.518692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.602 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.518872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.602 [2024-12-10 12:36:54.518904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.602 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.519110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.602 [2024-12-10 12:36:54.519142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.602 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.519287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.602 [2024-12-10 12:36:54.519320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.602 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.519499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.602 [2024-12-10 12:36:54.519531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.602 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.519642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.602 [2024-12-10 12:36:54.519673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.602 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.519849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.602 [2024-12-10 12:36:54.519882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.602 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.519994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.602 [2024-12-10 12:36:54.520026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.602 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.520200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.602 [2024-12-10 12:36:54.520234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.602 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.520441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.602 [2024-12-10 12:36:54.520474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.602 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.520735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.602 [2024-12-10 12:36:54.520766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.602 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.520958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.602 [2024-12-10 12:36:54.520990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.602 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.521165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.602 [2024-12-10 12:36:54.521199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.602 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.521367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.602 [2024-12-10 12:36:54.521399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.602 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.521579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.602 [2024-12-10 12:36:54.521611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.602 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.521711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.602 [2024-12-10 12:36:54.521743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.602 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.521939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.602 [2024-12-10 12:36:54.521971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.602 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.522173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.602 [2024-12-10 12:36:54.522207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.602 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.522343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.602 [2024-12-10 12:36:54.522380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.602 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.522643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.602 [2024-12-10 12:36:54.522676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420
00:28:32.602 qpair failed and we were unable to recover it.
00:28:32.602 [2024-12-10 12:36:54.522862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-10 12:36:54.522894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-10 12:36:54.523065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-10 12:36:54.523097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-10 12:36:54.523312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-10 12:36:54.523345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-10 12:36:54.523521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-10 12:36:54.523553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-10 12:36:54.523795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-10 12:36:54.523828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 
00:28:32.602 [2024-12-10 12:36:54.523997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-10 12:36:54.524029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-10 12:36:54.524223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-10 12:36:54.524258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-10 12:36:54.524394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-10 12:36:54.524426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-10 12:36:54.524595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-10 12:36:54.524627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-10 12:36:54.524902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-10 12:36:54.524934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 
00:28:32.602 [2024-12-10 12:36:54.525049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-10 12:36:54.525081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.602 [2024-12-10 12:36:54.525206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.602 [2024-12-10 12:36:54.525239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.602 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.525432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.525465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.525636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.525668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.525854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.525887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 
00:28:32.603 [2024-12-10 12:36:54.526058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.526088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.526404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.526434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.526606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.526636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.526826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.526855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.527030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.527059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 
00:28:32.603 [2024-12-10 12:36:54.527230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.527260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.527450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.527479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.527655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.527684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.527801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.527830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.527996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.528024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 
00:28:32.603 [2024-12-10 12:36:54.528212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.528242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.528424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.528453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.528692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.528722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.528889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.528918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.529109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.529139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 
00:28:32.603 [2024-12-10 12:36:54.529349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.529398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.529591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.529646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.529964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.529998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.530125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.530169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.530413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.530445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 
00:28:32.603 [2024-12-10 12:36:54.530728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.530760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.531031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.531069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.531246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.531281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.531467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.531513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.531683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.531715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 
00:28:32.603 [2024-12-10 12:36:54.531981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.532016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.532328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.532365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.532489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.532522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.532766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.532797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.532995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.533029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 
00:28:32.603 [2024-12-10 12:36:54.533151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.533195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.533301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.533334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.533575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.533607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.533731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.533762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.603 [2024-12-10 12:36:54.534028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.534060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 
00:28:32.603 [2024-12-10 12:36:54.534339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.603 [2024-12-10 12:36:54.534372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.603 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.534486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.534518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.534784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.534816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.534985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.535017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.535254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.535287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 
00:28:32.604 [2024-12-10 12:36:54.535409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.535440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.535637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.535668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.535926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.535958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.536127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.536167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.536273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.536306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 
00:28:32.604 [2024-12-10 12:36:54.536437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.536468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.536636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.536667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.536844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.536876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.537134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.537177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.537366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.537398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88cc000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 
00:28:32.604 [2024-12-10 12:36:54.537599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.537651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.537857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.537896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.538071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.538102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.538376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.538410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.538591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.538623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 
00:28:32.604 [2024-12-10 12:36:54.538868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.538899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.539169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.539203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.539309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.539342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.539533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.539566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.539736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.539770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 
00:28:32.604 [2024-12-10 12:36:54.539972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.540004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.540225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.540260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.540448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.540481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.540744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.540785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.541066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.541099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 
00:28:32.604 [2024-12-10 12:36:54.541367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.541400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.541691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.541729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.541991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.542028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.542212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.542246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.542358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.542389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 
00:28:32.604 [2024-12-10 12:36:54.542572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.542607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.542779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.542811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.542930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.542964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.543297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.543340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 00:28:32.604 [2024-12-10 12:36:54.543666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.604 [2024-12-10 12:36:54.543700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.604 qpair failed and we were unable to recover it. 
00:28:32.605 [2024-12-10 12:36:54.543873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.605 [2024-12-10 12:36:54.543905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.605 qpair failed and we were unable to recover it. 00:28:32.605 [2024-12-10 12:36:54.544203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.605 [2024-12-10 12:36:54.544237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.605 qpair failed and we were unable to recover it. 00:28:32.605 [2024-12-10 12:36:54.544420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.605 [2024-12-10 12:36:54.544454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.605 qpair failed and we were unable to recover it. 00:28:32.605 [2024-12-10 12:36:54.544648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.605 [2024-12-10 12:36:54.544680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.605 qpair failed and we were unable to recover it. 00:28:32.605 [2024-12-10 12:36:54.544850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.605 [2024-12-10 12:36:54.544883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.605 qpair failed and we were unable to recover it. 
00:28:32.605 [2024-12-10 12:36:54.545151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.545194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 Malloc0
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.545464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.545497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.545670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.545703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:32.605 [2024-12-10 12:36:54.545963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.545996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.546285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.546319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.546534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.546571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:32.605 [2024-12-10 12:36:54.546718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.546750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.546868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.546901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:32.605 [2024-12-10 12:36:54.547069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.547108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.547315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.547350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.547463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.547498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.547626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.547660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.547927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.547960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.548144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.548188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.548373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.548404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.548601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.548633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.548738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.548770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.548884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.548916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.549178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.549212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.549419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.549451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.549553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.549582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.549841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.549874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.550059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.550091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.550284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.550318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.550509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.550541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.550711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.550742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.550935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.550967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.551256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.551289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.551479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.551511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.551797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.551830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.551948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.551981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.552167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.552201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.605 [2024-12-10 12:36:54.552370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.605 [2024-12-10 12:36:54.552401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.605 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.552572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.552604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.552704] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:32.606 [2024-12-10 12:36:54.552708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.552740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.553016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.553055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.553253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.553287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.553476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.553508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.553645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.553677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.553944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.553979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.554221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.554255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.554372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.554404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.554614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.554645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.554813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.554845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.555045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.555077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.555263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.555298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.555412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.555442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.555633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.555664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.555923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.555956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.556148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.556190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.556363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.556398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.556502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.556533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.556795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.556827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.557124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.557170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.557364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.557397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.557572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.557604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.557778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.557810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.557938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.557969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.558187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.558222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.558419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.558451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.558622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.558656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.558843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.558876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.559120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.559169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.559488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.559522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.559767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.559800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.560089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.560122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.560307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.560341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.560604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.560638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.560808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.560840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.561048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.561079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 [2024-12-10 12:36:54.561320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.606 [2024-12-10 12:36:54.561354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.606 qpair failed and we were unable to recover it.
00:28:32.606 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:32.606 [2024-12-10 12:36:54.561529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-10 12:36:54.561562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-10 12:36:54.561780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:32.607 [2024-12-10 12:36:54.561813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-10 12:36:54.562062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-10 12:36:54.562094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.607 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-10 12:36:54.562286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-10 12:36:54.562325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:32.607 [2024-12-10 12:36:54.562443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-10 12:36:54.562476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-10 12:36:54.562734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-10 12:36:54.562773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-10 12:36:54.562898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-10 12:36:54.562930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-10 12:36:54.563190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-10 12:36:54.563223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-10 12:36:54.563503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-10 12:36:54.563535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-10 12:36:54.563651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-10 12:36:54.563683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-10 12:36:54.563860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-10 12:36:54.563894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-10 12:36:54.564096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-10 12:36:54.564128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-10 12:36:54.564321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-10 12:36:54.564360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-10 12:36:54.564547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-10 12:36:54.564578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-10 12:36:54.564748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-10 12:36:54.564780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-10 12:36:54.564899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-10 12:36:54.564931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-10 12:36:54.565103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-10 12:36:54.565135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-10 12:36:54.565275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-10 12:36:54.565309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-10 12:36:54.565568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-10 12:36:54.565600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-10 12:36:54.565704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:32.607 [2024-12-10 12:36:54.565737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420
00:28:32.607 qpair failed and we were unable to recover it.
00:28:32.607 [2024-12-10 12:36:54.565850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-10 12:36:54.565889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-10 12:36:54.566060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-10 12:36:54.566093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-10 12:36:54.566207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-10 12:36:54.566241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-10 12:36:54.566355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-10 12:36:54.566387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-10 12:36:54.566556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-10 12:36:54.566589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 
00:28:32.607 [2024-12-10 12:36:54.566756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-10 12:36:54.566788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-10 12:36:54.567077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-10 12:36:54.567110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-10 12:36:54.567259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-10 12:36:54.567292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-10 12:36:54.567463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-10 12:36:54.567495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-10 12:36:54.567759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-10 12:36:54.567791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 
00:28:32.607 [2024-12-10 12:36:54.567968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-10 12:36:54.568002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-10 12:36:54.568193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-10 12:36:54.568228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-10 12:36:54.568415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.607 [2024-12-10 12:36:54.568447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.607 qpair failed and we were unable to recover it. 00:28:32.607 [2024-12-10 12:36:54.568615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.568646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.568918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.568951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 
00:28:32.608 [2024-12-10 12:36:54.569165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.569198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.569370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.569403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.608 [2024-12-10 12:36:54.569517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.569549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.569671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.569703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 
00:28:32.608 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:32.608 [2024-12-10 12:36:54.569984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.570017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.608 [2024-12-10 12:36:54.570213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.570251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.570376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.570411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.608 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.570605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.570642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 
00:28:32.608 [2024-12-10 12:36:54.570852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.570885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.571053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.571086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.571267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.571301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.571489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.571522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.571715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.571755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 
00:28:32.608 [2024-12-10 12:36:54.571925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.571957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.572140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.572184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.572390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.572423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.572624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.572656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.572852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.572885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 
00:28:32.608 [2024-12-10 12:36:54.573147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.573191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.573392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.573424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.573539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.573571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.573818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.573850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.574019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.574052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 
00:28:32.608 [2024-12-10 12:36:54.574268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.574302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.574605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.574638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.574810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.574842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.575096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.575129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1574be0 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.575459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.575509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d8000b90 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 
00:28:32.608 [2024-12-10 12:36:54.575641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.575680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.575875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.575911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.576056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.576090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.576280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.576314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.576517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.576551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 
00:28:32.608 [2024-12-10 12:36:54.576746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.576786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.576981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.608 [2024-12-10 12:36:54.577015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.608 qpair failed and we were unable to recover it. 00:28:32.608 [2024-12-10 12:36:54.577184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.609 [2024-12-10 12:36:54.577220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.609 qpair failed and we were unable to recover it. 00:28:32.609 [2024-12-10 12:36:54.577414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.609 [2024-12-10 12:36:54.577447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.609 qpair failed and we were unable to recover it. 00:28:32.609 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.609 [2024-12-10 12:36:54.577623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.609 [2024-12-10 12:36:54.577657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.609 qpair failed and we were unable to recover it. 
00:28:32.609 [2024-12-10 12:36:54.577851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.609 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:32.609 [2024-12-10 12:36:54.577884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.609 qpair failed and we were unable to recover it. 00:28:32.609 [2024-12-10 12:36:54.578078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.609 [2024-12-10 12:36:54.578113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.609 qpair failed and we were unable to recover it. 00:28:32.609 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.609 [2024-12-10 12:36:54.578322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.609 [2024-12-10 12:36:54.578357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.609 qpair failed and we were unable to recover it. 00:28:32.609 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:32.609 [2024-12-10 12:36:54.578550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.609 [2024-12-10 12:36:54.578582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.609 qpair failed and we were unable to recover it. 
00:28:32.609 [2024-12-10 12:36:54.578829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.609 [2024-12-10 12:36:54.578863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.609 qpair failed and we were unable to recover it. 00:28:32.609 [2024-12-10 12:36:54.579037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.609 [2024-12-10 12:36:54.579072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.609 qpair failed and we were unable to recover it. 00:28:32.609 [2024-12-10 12:36:54.579261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.609 [2024-12-10 12:36:54.579301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.609 qpair failed and we were unable to recover it. 00:28:32.609 [2024-12-10 12:36:54.579577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.609 [2024-12-10 12:36:54.579609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.609 qpair failed and we were unable to recover it. 00:28:32.609 [2024-12-10 12:36:54.579827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.609 [2024-12-10 12:36:54.579858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.609 qpair failed and we were unable to recover it. 
00:28:32.609 [2024-12-10 12:36:54.580056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.609 [2024-12-10 12:36:54.580093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.609 qpair failed and we were unable to recover it. 00:28:32.609 [2024-12-10 12:36:54.580344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.609 [2024-12-10 12:36:54.580380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.609 qpair failed and we were unable to recover it. 00:28:32.609 [2024-12-10 12:36:54.580649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.609 [2024-12-10 12:36:54.580681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.609 qpair failed and we were unable to recover it. 00:28:32.609 [2024-12-10 12:36:54.580883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:32.609 [2024-12-10 12:36:54.580915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f88d0000b90 with addr=10.0.0.2, port=4420 00:28:32.609 [2024-12-10 12:36:54.580925] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.609 qpair failed and we were unable to recover it. 
00:28:32.609 [2024-12-10 12:36:54.583393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.609 [2024-12-10 12:36:54.583508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.609 [2024-12-10 12:36:54.583555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.609 [2024-12-10 12:36:54.583587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.609 [2024-12-10 12:36:54.583608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:32.609 [2024-12-10 12:36:54.583664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.609 qpair failed and we were unable to recover it. 
00:28:32.609 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.609 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:32.609 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.609 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:32.609 [2024-12-10 12:36:54.593315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.609 [2024-12-10 12:36:54.593401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.609 [2024-12-10 12:36:54.593434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.609 [2024-12-10 12:36:54.593452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.609 [2024-12-10 12:36:54.593474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:32.609 [2024-12-10 12:36:54.593511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.609 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.609 qpair failed and we were unable to recover it. 
00:28:32.609 12:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1790461 00:28:32.609 [2024-12-10 12:36:54.603290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.609 [2024-12-10 12:36:54.603358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.609 [2024-12-10 12:36:54.603380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.609 [2024-12-10 12:36:54.603392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.609 [2024-12-10 12:36:54.603402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:32.609 [2024-12-10 12:36:54.603425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.609 qpair failed and we were unable to recover it. 
00:28:32.609 [2024-12-10 12:36:54.613303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.609 [2024-12-10 12:36:54.613368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.609 [2024-12-10 12:36:54.613384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.610 [2024-12-10 12:36:54.613392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.610 [2024-12-10 12:36:54.613399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:32.610 [2024-12-10 12:36:54.613416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.610 qpair failed and we were unable to recover it. 
00:28:32.610 [2024-12-10 12:36:54.623275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.610 [2024-12-10 12:36:54.623340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.610 [2024-12-10 12:36:54.623355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.610 [2024-12-10 12:36:54.623363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.610 [2024-12-10 12:36:54.623369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:32.610 [2024-12-10 12:36:54.623385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.610 qpair failed and we were unable to recover it. 
00:28:32.610 [2024-12-10 12:36:54.633331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.610 [2024-12-10 12:36:54.633388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.610 [2024-12-10 12:36:54.633403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.610 [2024-12-10 12:36:54.633410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.610 [2024-12-10 12:36:54.633419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.610 [2024-12-10 12:36:54.633435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-10 12:36:54.643330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.610 [2024-12-10 12:36:54.643390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.610 [2024-12-10 12:36:54.643404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.610 [2024-12-10 12:36:54.643412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.610 [2024-12-10 12:36:54.643419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.610 [2024-12-10 12:36:54.643434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-10 12:36:54.653351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.610 [2024-12-10 12:36:54.653414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.610 [2024-12-10 12:36:54.653428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.610 [2024-12-10 12:36:54.653436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.610 [2024-12-10 12:36:54.653443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.610 [2024-12-10 12:36:54.653458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-10 12:36:54.663412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.610 [2024-12-10 12:36:54.663472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.610 [2024-12-10 12:36:54.663486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.610 [2024-12-10 12:36:54.663493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.610 [2024-12-10 12:36:54.663500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.610 [2024-12-10 12:36:54.663515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-10 12:36:54.673435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.610 [2024-12-10 12:36:54.673492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.610 [2024-12-10 12:36:54.673506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.610 [2024-12-10 12:36:54.673513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.610 [2024-12-10 12:36:54.673520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.610 [2024-12-10 12:36:54.673535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-10 12:36:54.683463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.610 [2024-12-10 12:36:54.683567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.610 [2024-12-10 12:36:54.683581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.610 [2024-12-10 12:36:54.683589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.610 [2024-12-10 12:36:54.683595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.610 [2024-12-10 12:36:54.683610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-10 12:36:54.693484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.610 [2024-12-10 12:36:54.693541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.610 [2024-12-10 12:36:54.693555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.610 [2024-12-10 12:36:54.693562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.610 [2024-12-10 12:36:54.693569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.610 [2024-12-10 12:36:54.693584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-10 12:36:54.703545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.610 [2024-12-10 12:36:54.703603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.610 [2024-12-10 12:36:54.703618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.610 [2024-12-10 12:36:54.703625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.610 [2024-12-10 12:36:54.703631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.610 [2024-12-10 12:36:54.703648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-10 12:36:54.713521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.610 [2024-12-10 12:36:54.713575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.610 [2024-12-10 12:36:54.713590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.610 [2024-12-10 12:36:54.713598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.610 [2024-12-10 12:36:54.713605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.610 [2024-12-10 12:36:54.713621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-10 12:36:54.723556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.610 [2024-12-10 12:36:54.723616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.610 [2024-12-10 12:36:54.723634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.610 [2024-12-10 12:36:54.723641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.610 [2024-12-10 12:36:54.723648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.610 [2024-12-10 12:36:54.723663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.610 [2024-12-10 12:36:54.733593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.610 [2024-12-10 12:36:54.733669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.610 [2024-12-10 12:36:54.733684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.610 [2024-12-10 12:36:54.733691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.610 [2024-12-10 12:36:54.733698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.610 [2024-12-10 12:36:54.733713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.610 qpair failed and we were unable to recover it.
00:28:32.870 [2024-12-10 12:36:54.743626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.870 [2024-12-10 12:36:54.743686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.870 [2024-12-10 12:36:54.743704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.870 [2024-12-10 12:36:54.743713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.870 [2024-12-10 12:36:54.743720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.870 [2024-12-10 12:36:54.743737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.870 qpair failed and we were unable to recover it.
00:28:32.870 [2024-12-10 12:36:54.753637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.870 [2024-12-10 12:36:54.753697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.870 [2024-12-10 12:36:54.753711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.870 [2024-12-10 12:36:54.753719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.870 [2024-12-10 12:36:54.753726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.870 [2024-12-10 12:36:54.753742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.870 qpair failed and we were unable to recover it.
00:28:32.870 [2024-12-10 12:36:54.763675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.870 [2024-12-10 12:36:54.763728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.870 [2024-12-10 12:36:54.763743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.870 [2024-12-10 12:36:54.763753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.870 [2024-12-10 12:36:54.763760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.870 [2024-12-10 12:36:54.763776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.870 qpair failed and we were unable to recover it.
00:28:32.870 [2024-12-10 12:36:54.773701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.870 [2024-12-10 12:36:54.773756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.870 [2024-12-10 12:36:54.773770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.870 [2024-12-10 12:36:54.773777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.870 [2024-12-10 12:36:54.773784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.870 [2024-12-10 12:36:54.773799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.870 qpair failed and we were unable to recover it.
00:28:32.870 [2024-12-10 12:36:54.783726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.870 [2024-12-10 12:36:54.783782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.870 [2024-12-10 12:36:54.783799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.870 [2024-12-10 12:36:54.783808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.870 [2024-12-10 12:36:54.783815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.870 [2024-12-10 12:36:54.783833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.870 qpair failed and we were unable to recover it.
00:28:32.870 [2024-12-10 12:36:54.793747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.871 [2024-12-10 12:36:54.793806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.871 [2024-12-10 12:36:54.793821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.871 [2024-12-10 12:36:54.793829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.871 [2024-12-10 12:36:54.793835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.871 [2024-12-10 12:36:54.793850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.871 qpair failed and we were unable to recover it.
00:28:32.871 [2024-12-10 12:36:54.803781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.871 [2024-12-10 12:36:54.803876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.871 [2024-12-10 12:36:54.803891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.871 [2024-12-10 12:36:54.803898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.871 [2024-12-10 12:36:54.803904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.871 [2024-12-10 12:36:54.803920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.871 qpair failed and we were unable to recover it.
00:28:32.871 [2024-12-10 12:36:54.813797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.871 [2024-12-10 12:36:54.813855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.871 [2024-12-10 12:36:54.813871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.871 [2024-12-10 12:36:54.813878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.871 [2024-12-10 12:36:54.813886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.871 [2024-12-10 12:36:54.813901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.871 qpair failed and we were unable to recover it.
00:28:32.871 [2024-12-10 12:36:54.823832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.871 [2024-12-10 12:36:54.823931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.871 [2024-12-10 12:36:54.823946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.871 [2024-12-10 12:36:54.823953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.871 [2024-12-10 12:36:54.823959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.871 [2024-12-10 12:36:54.823974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.871 qpair failed and we were unable to recover it.
00:28:32.871 [2024-12-10 12:36:54.833867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.871 [2024-12-10 12:36:54.833924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.871 [2024-12-10 12:36:54.833938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.871 [2024-12-10 12:36:54.833946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.871 [2024-12-10 12:36:54.833953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.871 [2024-12-10 12:36:54.833968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.871 qpair failed and we were unable to recover it.
00:28:32.871 [2024-12-10 12:36:54.843889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.871 [2024-12-10 12:36:54.843945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.871 [2024-12-10 12:36:54.843959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.871 [2024-12-10 12:36:54.843968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.871 [2024-12-10 12:36:54.843974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.871 [2024-12-10 12:36:54.843989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.871 qpair failed and we were unable to recover it.
00:28:32.871 [2024-12-10 12:36:54.853931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.871 [2024-12-10 12:36:54.853995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.871 [2024-12-10 12:36:54.854008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.871 [2024-12-10 12:36:54.854017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.871 [2024-12-10 12:36:54.854023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.871 [2024-12-10 12:36:54.854038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.871 qpair failed and we were unable to recover it.
00:28:32.871 [2024-12-10 12:36:54.863869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.871 [2024-12-10 12:36:54.863931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.871 [2024-12-10 12:36:54.863945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.871 [2024-12-10 12:36:54.863953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.871 [2024-12-10 12:36:54.863960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.871 [2024-12-10 12:36:54.863975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.871 qpair failed and we were unable to recover it.
00:28:32.871 [2024-12-10 12:36:54.873966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.871 [2024-12-10 12:36:54.874051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.871 [2024-12-10 12:36:54.874065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.871 [2024-12-10 12:36:54.874072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.871 [2024-12-10 12:36:54.874079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.871 [2024-12-10 12:36:54.874094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.871 qpair failed and we were unable to recover it.
00:28:32.871 [2024-12-10 12:36:54.884012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.871 [2024-12-10 12:36:54.884063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.871 [2024-12-10 12:36:54.884077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.871 [2024-12-10 12:36:54.884084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.871 [2024-12-10 12:36:54.884091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.871 [2024-12-10 12:36:54.884107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.871 qpair failed and we were unable to recover it.
00:28:32.871 [2024-12-10 12:36:54.894045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.871 [2024-12-10 12:36:54.894104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.871 [2024-12-10 12:36:54.894118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.871 [2024-12-10 12:36:54.894128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.871 [2024-12-10 12:36:54.894134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.871 [2024-12-10 12:36:54.894149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.871 qpair failed and we were unable to recover it.
00:28:32.871 [2024-12-10 12:36:54.904082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.871 [2024-12-10 12:36:54.904142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.871 [2024-12-10 12:36:54.904156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.871 [2024-12-10 12:36:54.904167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.871 [2024-12-10 12:36:54.904173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.871 [2024-12-10 12:36:54.904189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.871 qpair failed and we were unable to recover it.
00:28:32.871 [2024-12-10 12:36:54.914103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.871 [2024-12-10 12:36:54.914160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.871 [2024-12-10 12:36:54.914176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.871 [2024-12-10 12:36:54.914184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.871 [2024-12-10 12:36:54.914190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.871 [2024-12-10 12:36:54.914206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.871 qpair failed and we were unable to recover it.
00:28:32.871 [2024-12-10 12:36:54.924091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.871 [2024-12-10 12:36:54.924145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.872 [2024-12-10 12:36:54.924163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.872 [2024-12-10 12:36:54.924171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.872 [2024-12-10 12:36:54.924178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.872 [2024-12-10 12:36:54.924194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.872 qpair failed and we were unable to recover it.
00:28:32.872 [2024-12-10 12:36:54.934153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.872 [2024-12-10 12:36:54.934217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.872 [2024-12-10 12:36:54.934231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.872 [2024-12-10 12:36:54.934237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.872 [2024-12-10 12:36:54.934244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.872 [2024-12-10 12:36:54.934264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.872 qpair failed and we were unable to recover it.
00:28:32.872 [2024-12-10 12:36:54.944185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.872 [2024-12-10 12:36:54.944244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.872 [2024-12-10 12:36:54.944258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.872 [2024-12-10 12:36:54.944265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.872 [2024-12-10 12:36:54.944271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.872 [2024-12-10 12:36:54.944286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.872 qpair failed and we were unable to recover it.
00:28:32.872 [2024-12-10 12:36:54.954199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.872 [2024-12-10 12:36:54.954249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.872 [2024-12-10 12:36:54.954265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.872 [2024-12-10 12:36:54.954272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.872 [2024-12-10 12:36:54.954279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.872 [2024-12-10 12:36:54.954294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.872 qpair failed and we were unable to recover it.
00:28:32.872 [2024-12-10 12:36:54.964210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.872 [2024-12-10 12:36:54.964265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.872 [2024-12-10 12:36:54.964279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.872 [2024-12-10 12:36:54.964286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.872 [2024-12-10 12:36:54.964293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.872 [2024-12-10 12:36:54.964309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.872 qpair failed and we were unable to recover it.
00:28:32.872 [2024-12-10 12:36:54.974259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:32.872 [2024-12-10 12:36:54.974321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:32.872 [2024-12-10 12:36:54.974335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:32.872 [2024-12-10 12:36:54.974342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:32.872 [2024-12-10 12:36:54.974349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:32.872 [2024-12-10 12:36:54.974363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:32.872 qpair failed and we were unable to recover it.
00:28:32.872 [2024-12-10 12:36:54.984318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.872 [2024-12-10 12:36:54.984382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.872 [2024-12-10 12:36:54.984396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.872 [2024-12-10 12:36:54.984404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.872 [2024-12-10 12:36:54.984410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:32.872 [2024-12-10 12:36:54.984426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.872 qpair failed and we were unable to recover it. 
00:28:32.872 [2024-12-10 12:36:54.994324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.872 [2024-12-10 12:36:54.994379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.872 [2024-12-10 12:36:54.994393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.872 [2024-12-10 12:36:54.994400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.872 [2024-12-10 12:36:54.994407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:32.872 [2024-12-10 12:36:54.994422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.872 qpair failed and we were unable to recover it. 
00:28:32.872 [2024-12-10 12:36:55.004349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.872 [2024-12-10 12:36:55.004403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.872 [2024-12-10 12:36:55.004417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.872 [2024-12-10 12:36:55.004425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.872 [2024-12-10 12:36:55.004431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:32.872 [2024-12-10 12:36:55.004446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.872 qpair failed and we were unable to recover it. 
00:28:32.872 [2024-12-10 12:36:55.014328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.872 [2024-12-10 12:36:55.014387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.872 [2024-12-10 12:36:55.014403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.872 [2024-12-10 12:36:55.014410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.872 [2024-12-10 12:36:55.014416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:32.872 [2024-12-10 12:36:55.014431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.872 qpair failed and we were unable to recover it. 
00:28:32.872 [2024-12-10 12:36:55.024426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.872 [2024-12-10 12:36:55.024483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.872 [2024-12-10 12:36:55.024501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.872 [2024-12-10 12:36:55.024508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.872 [2024-12-10 12:36:55.024515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:32.872 [2024-12-10 12:36:55.024530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.872 qpair failed and we were unable to recover it. 
00:28:32.872 [2024-12-10 12:36:55.034435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:32.872 [2024-12-10 12:36:55.034496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:32.872 [2024-12-10 12:36:55.034519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:32.872 [2024-12-10 12:36:55.034531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:32.872 [2024-12-10 12:36:55.034541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:32.872 [2024-12-10 12:36:55.034559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:32.872 qpair failed and we were unable to recover it. 
00:28:33.131 [2024-12-10 12:36:55.044462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.131 [2024-12-10 12:36:55.044520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.131 [2024-12-10 12:36:55.044538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.131 [2024-12-10 12:36:55.044546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.131 [2024-12-10 12:36:55.044553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.131 [2024-12-10 12:36:55.044570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.131 qpair failed and we were unable to recover it. 
00:28:33.131 [2024-12-10 12:36:55.054509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.131 [2024-12-10 12:36:55.054568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.131 [2024-12-10 12:36:55.054583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.131 [2024-12-10 12:36:55.054590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.131 [2024-12-10 12:36:55.054597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.131 [2024-12-10 12:36:55.054613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.131 qpair failed and we were unable to recover it. 
00:28:33.131 [2024-12-10 12:36:55.064505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.132 [2024-12-10 12:36:55.064567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.132 [2024-12-10 12:36:55.064581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.132 [2024-12-10 12:36:55.064588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.132 [2024-12-10 12:36:55.064595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.132 [2024-12-10 12:36:55.064614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.132 qpair failed and we were unable to recover it. 
00:28:33.132 [2024-12-10 12:36:55.074599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.132 [2024-12-10 12:36:55.074714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.132 [2024-12-10 12:36:55.074730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.132 [2024-12-10 12:36:55.074737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.132 [2024-12-10 12:36:55.074744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.132 [2024-12-10 12:36:55.074759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.132 qpair failed and we were unable to recover it. 
00:28:33.132 [2024-12-10 12:36:55.084581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.132 [2024-12-10 12:36:55.084637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.132 [2024-12-10 12:36:55.084651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.132 [2024-12-10 12:36:55.084659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.132 [2024-12-10 12:36:55.084665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.132 [2024-12-10 12:36:55.084681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.132 qpair failed and we were unable to recover it. 
00:28:33.132 [2024-12-10 12:36:55.094623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.132 [2024-12-10 12:36:55.094695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.132 [2024-12-10 12:36:55.094710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.132 [2024-12-10 12:36:55.094717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.132 [2024-12-10 12:36:55.094727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.132 [2024-12-10 12:36:55.094744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.132 qpair failed and we were unable to recover it. 
00:28:33.132 [2024-12-10 12:36:55.104642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.132 [2024-12-10 12:36:55.104697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.132 [2024-12-10 12:36:55.104711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.132 [2024-12-10 12:36:55.104718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.132 [2024-12-10 12:36:55.104726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.132 [2024-12-10 12:36:55.104744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.132 qpair failed and we were unable to recover it. 
00:28:33.132 [2024-12-10 12:36:55.114661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.132 [2024-12-10 12:36:55.114715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.132 [2024-12-10 12:36:55.114730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.132 [2024-12-10 12:36:55.114738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.132 [2024-12-10 12:36:55.114745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.132 [2024-12-10 12:36:55.114761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.132 qpair failed and we were unable to recover it. 
00:28:33.132 [2024-12-10 12:36:55.124704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.132 [2024-12-10 12:36:55.124761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.132 [2024-12-10 12:36:55.124776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.132 [2024-12-10 12:36:55.124785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.132 [2024-12-10 12:36:55.124792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.132 [2024-12-10 12:36:55.124806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.132 qpair failed and we were unable to recover it. 
00:28:33.132 [2024-12-10 12:36:55.134717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.132 [2024-12-10 12:36:55.134773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.132 [2024-12-10 12:36:55.134787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.132 [2024-12-10 12:36:55.134794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.132 [2024-12-10 12:36:55.134801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.132 [2024-12-10 12:36:55.134817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.132 qpair failed and we were unable to recover it. 
00:28:33.132 [2024-12-10 12:36:55.144688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.132 [2024-12-10 12:36:55.144744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.132 [2024-12-10 12:36:55.144758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.132 [2024-12-10 12:36:55.144766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.132 [2024-12-10 12:36:55.144772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.132 [2024-12-10 12:36:55.144788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.132 qpair failed and we were unable to recover it. 
00:28:33.132 [2024-12-10 12:36:55.154757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.132 [2024-12-10 12:36:55.154817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.132 [2024-12-10 12:36:55.154834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.132 [2024-12-10 12:36:55.154841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.132 [2024-12-10 12:36:55.154847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.132 [2024-12-10 12:36:55.154862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.132 qpair failed and we were unable to recover it. 
00:28:33.132 [2024-12-10 12:36:55.164744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.132 [2024-12-10 12:36:55.164800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.132 [2024-12-10 12:36:55.164815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.132 [2024-12-10 12:36:55.164824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.132 [2024-12-10 12:36:55.164833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.132 [2024-12-10 12:36:55.164850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.132 qpair failed and we were unable to recover it. 
00:28:33.132 [2024-12-10 12:36:55.174775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.132 [2024-12-10 12:36:55.174848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.132 [2024-12-10 12:36:55.174862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.132 [2024-12-10 12:36:55.174869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.132 [2024-12-10 12:36:55.174875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.132 [2024-12-10 12:36:55.174892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.132 qpair failed and we were unable to recover it. 
00:28:33.132 [2024-12-10 12:36:55.184922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.132 [2024-12-10 12:36:55.184989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.132 [2024-12-10 12:36:55.185003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.132 [2024-12-10 12:36:55.185010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.132 [2024-12-10 12:36:55.185017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.132 [2024-12-10 12:36:55.185032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.132 qpair failed and we were unable to recover it. 
00:28:33.132 [2024-12-10 12:36:55.194953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.133 [2024-12-10 12:36:55.195045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.133 [2024-12-10 12:36:55.195059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.133 [2024-12-10 12:36:55.195066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.133 [2024-12-10 12:36:55.195076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.133 [2024-12-10 12:36:55.195093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.133 qpair failed and we were unable to recover it. 
00:28:33.133 [2024-12-10 12:36:55.204855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.133 [2024-12-10 12:36:55.204911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.133 [2024-12-10 12:36:55.204928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.133 [2024-12-10 12:36:55.204935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.133 [2024-12-10 12:36:55.204942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.133 [2024-12-10 12:36:55.204958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.133 qpair failed and we were unable to recover it. 
00:28:33.133 [2024-12-10 12:36:55.214950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.133 [2024-12-10 12:36:55.215007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.133 [2024-12-10 12:36:55.215022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.133 [2024-12-10 12:36:55.215029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.133 [2024-12-10 12:36:55.215036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.133 [2024-12-10 12:36:55.215052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.133 qpair failed and we were unable to recover it. 
00:28:33.133 [2024-12-10 12:36:55.224903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.133 [2024-12-10 12:36:55.224971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.133 [2024-12-10 12:36:55.224986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.133 [2024-12-10 12:36:55.224993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.133 [2024-12-10 12:36:55.224999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.133 [2024-12-10 12:36:55.225015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.133 qpair failed and we were unable to recover it. 
00:28:33.133 [2024-12-10 12:36:55.234994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.133 [2024-12-10 12:36:55.235053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.133 [2024-12-10 12:36:55.235067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.133 [2024-12-10 12:36:55.235075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.133 [2024-12-10 12:36:55.235081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.133 [2024-12-10 12:36:55.235096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.133 qpair failed and we were unable to recover it. 
00:28:33.133 [2024-12-10 12:36:55.244963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.133 [2024-12-10 12:36:55.245017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.133 [2024-12-10 12:36:55.245031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.133 [2024-12-10 12:36:55.245038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.133 [2024-12-10 12:36:55.245045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.133 [2024-12-10 12:36:55.245060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.133 qpair failed and we were unable to recover it. 
00:28:33.133 [2024-12-10 12:36:55.255063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.133 [2024-12-10 12:36:55.255121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.133 [2024-12-10 12:36:55.255135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.133 [2024-12-10 12:36:55.255142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.133 [2024-12-10 12:36:55.255149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.133 [2024-12-10 12:36:55.255168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.133 qpair failed and we were unable to recover it.
00:28:33.133 [2024-12-10 12:36:55.265021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.133 [2024-12-10 12:36:55.265076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.133 [2024-12-10 12:36:55.265090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.133 [2024-12-10 12:36:55.265097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.133 [2024-12-10 12:36:55.265103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.133 [2024-12-10 12:36:55.265119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.133 qpair failed and we were unable to recover it.
00:28:33.133 [2024-12-10 12:36:55.275120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.133 [2024-12-10 12:36:55.275216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.133 [2024-12-10 12:36:55.275231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.133 [2024-12-10 12:36:55.275238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.133 [2024-12-10 12:36:55.275245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.133 [2024-12-10 12:36:55.275260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.133 qpair failed and we were unable to recover it.
00:28:33.133 [2024-12-10 12:36:55.285140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.133 [2024-12-10 12:36:55.285198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.133 [2024-12-10 12:36:55.285214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.133 [2024-12-10 12:36:55.285222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.133 [2024-12-10 12:36:55.285228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.133 [2024-12-10 12:36:55.285243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.133 qpair failed and we were unable to recover it.
00:28:33.133 [2024-12-10 12:36:55.295100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.133 [2024-12-10 12:36:55.295193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.133 [2024-12-10 12:36:55.295211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.133 [2024-12-10 12:36:55.295219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.133 [2024-12-10 12:36:55.295226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.133 [2024-12-10 12:36:55.295243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.133 qpair failed and we were unable to recover it.
00:28:33.392 [2024-12-10 12:36:55.305239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.392 [2024-12-10 12:36:55.305301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.392 [2024-12-10 12:36:55.305320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.392 [2024-12-10 12:36:55.305328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.392 [2024-12-10 12:36:55.305335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.392 [2024-12-10 12:36:55.305354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.392 qpair failed and we were unable to recover it.
00:28:33.392 [2024-12-10 12:36:55.315231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.392 [2024-12-10 12:36:55.315299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.392 [2024-12-10 12:36:55.315315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.392 [2024-12-10 12:36:55.315322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.392 [2024-12-10 12:36:55.315329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.392 [2024-12-10 12:36:55.315345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.392 qpair failed and we were unable to recover it.
00:28:33.392 [2024-12-10 12:36:55.325177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.392 [2024-12-10 12:36:55.325232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.392 [2024-12-10 12:36:55.325247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.392 [2024-12-10 12:36:55.325260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.392 [2024-12-10 12:36:55.325267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.392 [2024-12-10 12:36:55.325283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.392 qpair failed and we were unable to recover it.
00:28:33.392 [2024-12-10 12:36:55.335229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.392 [2024-12-10 12:36:55.335286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.392 [2024-12-10 12:36:55.335301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.392 [2024-12-10 12:36:55.335308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.392 [2024-12-10 12:36:55.335315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.392 [2024-12-10 12:36:55.335330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.392 qpair failed and we were unable to recover it.
00:28:33.392 [2024-12-10 12:36:55.345238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.392 [2024-12-10 12:36:55.345299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.392 [2024-12-10 12:36:55.345313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.392 [2024-12-10 12:36:55.345321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.392 [2024-12-10 12:36:55.345328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.392 [2024-12-10 12:36:55.345344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.392 qpair failed and we were unable to recover it.
00:28:33.392 [2024-12-10 12:36:55.355268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.392 [2024-12-10 12:36:55.355325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.392 [2024-12-10 12:36:55.355338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.392 [2024-12-10 12:36:55.355345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.392 [2024-12-10 12:36:55.355352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.392 [2024-12-10 12:36:55.355367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.392 qpair failed and we were unable to recover it.
00:28:33.392 [2024-12-10 12:36:55.365361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.392 [2024-12-10 12:36:55.365434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.392 [2024-12-10 12:36:55.365448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.392 [2024-12-10 12:36:55.365455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.392 [2024-12-10 12:36:55.365462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.392 [2024-12-10 12:36:55.365476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.392 qpair failed and we were unable to recover it.
00:28:33.392 [2024-12-10 12:36:55.375435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.392 [2024-12-10 12:36:55.375520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.392 [2024-12-10 12:36:55.375533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.392 [2024-12-10 12:36:55.375540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.392 [2024-12-10 12:36:55.375547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.392 [2024-12-10 12:36:55.375562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.392 qpair failed and we were unable to recover it.
00:28:33.392 [2024-12-10 12:36:55.385398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.393 [2024-12-10 12:36:55.385452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.393 [2024-12-10 12:36:55.385466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.393 [2024-12-10 12:36:55.385473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.393 [2024-12-10 12:36:55.385480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.393 [2024-12-10 12:36:55.385495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.393 qpair failed and we were unable to recover it.
00:28:33.393 [2024-12-10 12:36:55.395444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.393 [2024-12-10 12:36:55.395502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.393 [2024-12-10 12:36:55.395516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.393 [2024-12-10 12:36:55.395523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.393 [2024-12-10 12:36:55.395530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.393 [2024-12-10 12:36:55.395545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.393 qpair failed and we were unable to recover it.
00:28:33.393 [2024-12-10 12:36:55.405449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.393 [2024-12-10 12:36:55.405506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.393 [2024-12-10 12:36:55.405520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.393 [2024-12-10 12:36:55.405527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.393 [2024-12-10 12:36:55.405534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.393 [2024-12-10 12:36:55.405549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.393 qpair failed and we were unable to recover it.
00:28:33.393 [2024-12-10 12:36:55.415443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.393 [2024-12-10 12:36:55.415505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.393 [2024-12-10 12:36:55.415519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.393 [2024-12-10 12:36:55.415526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.393 [2024-12-10 12:36:55.415533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.393 [2024-12-10 12:36:55.415548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.393 qpair failed and we were unable to recover it.
00:28:33.393 [2024-12-10 12:36:55.425454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.393 [2024-12-10 12:36:55.425512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.393 [2024-12-10 12:36:55.425526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.393 [2024-12-10 12:36:55.425534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.393 [2024-12-10 12:36:55.425541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.393 [2024-12-10 12:36:55.425557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.393 qpair failed and we were unable to recover it.
00:28:33.393 [2024-12-10 12:36:55.435474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.393 [2024-12-10 12:36:55.435538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.393 [2024-12-10 12:36:55.435551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.393 [2024-12-10 12:36:55.435559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.393 [2024-12-10 12:36:55.435565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.393 [2024-12-10 12:36:55.435580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.393 qpair failed and we were unable to recover it.
00:28:33.393 [2024-12-10 12:36:55.445583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.393 [2024-12-10 12:36:55.445631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.393 [2024-12-10 12:36:55.445645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.393 [2024-12-10 12:36:55.445653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.393 [2024-12-10 12:36:55.445659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.393 [2024-12-10 12:36:55.445674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.393 qpair failed and we were unable to recover it.
00:28:33.393 [2024-12-10 12:36:55.455602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.393 [2024-12-10 12:36:55.455658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.393 [2024-12-10 12:36:55.455674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.393 [2024-12-10 12:36:55.455685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.393 [2024-12-10 12:36:55.455692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.393 [2024-12-10 12:36:55.455708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.393 qpair failed and we were unable to recover it.
00:28:33.393 [2024-12-10 12:36:55.465630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.393 [2024-12-10 12:36:55.465686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.393 [2024-12-10 12:36:55.465701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.393 [2024-12-10 12:36:55.465708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.393 [2024-12-10 12:36:55.465714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.393 [2024-12-10 12:36:55.465729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.393 qpair failed and we were unable to recover it.
00:28:33.393 [2024-12-10 12:36:55.475590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.393 [2024-12-10 12:36:55.475649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.393 [2024-12-10 12:36:55.475664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.393 [2024-12-10 12:36:55.475671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.393 [2024-12-10 12:36:55.475677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.393 [2024-12-10 12:36:55.475693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.393 qpair failed and we were unable to recover it.
00:28:33.393 [2024-12-10 12:36:55.485714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.393 [2024-12-10 12:36:55.485803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.393 [2024-12-10 12:36:55.485817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.393 [2024-12-10 12:36:55.485824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.393 [2024-12-10 12:36:55.485830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.393 [2024-12-10 12:36:55.485846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.393 qpair failed and we were unable to recover it.
00:28:33.393 [2024-12-10 12:36:55.495727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.393 [2024-12-10 12:36:55.495809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.393 [2024-12-10 12:36:55.495822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.393 [2024-12-10 12:36:55.495830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.393 [2024-12-10 12:36:55.495836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.393 [2024-12-10 12:36:55.495857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.393 qpair failed and we were unable to recover it.
00:28:33.393 [2024-12-10 12:36:55.505749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.393 [2024-12-10 12:36:55.505853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.393 [2024-12-10 12:36:55.505891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.393 [2024-12-10 12:36:55.505899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.393 [2024-12-10 12:36:55.505906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.393 [2024-12-10 12:36:55.505935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.393 qpair failed and we were unable to recover it.
00:28:33.393 [2024-12-10 12:36:55.515773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.393 [2024-12-10 12:36:55.515830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.393 [2024-12-10 12:36:55.515846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.394 [2024-12-10 12:36:55.515854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.394 [2024-12-10 12:36:55.515860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.394 [2024-12-10 12:36:55.515876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.394 qpair failed and we were unable to recover it.
00:28:33.394 [2024-12-10 12:36:55.525801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.394 [2024-12-10 12:36:55.525857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.394 [2024-12-10 12:36:55.525871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.394 [2024-12-10 12:36:55.525880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.394 [2024-12-10 12:36:55.525886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.394 [2024-12-10 12:36:55.525902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.394 qpair failed and we were unable to recover it.
00:28:33.394 [2024-12-10 12:36:55.535845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.394 [2024-12-10 12:36:55.535911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.394 [2024-12-10 12:36:55.535926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.394 [2024-12-10 12:36:55.535933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.394 [2024-12-10 12:36:55.535940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.394 [2024-12-10 12:36:55.535955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.394 qpair failed and we were unable to recover it.
00:28:33.394 [2024-12-10 12:36:55.545880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.394 [2024-12-10 12:36:55.545935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.394 [2024-12-10 12:36:55.545950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.394 [2024-12-10 12:36:55.545957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.394 [2024-12-10 12:36:55.545964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.394 [2024-12-10 12:36:55.545979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.394 qpair failed and we were unable to recover it.
00:28:33.394 [2024-12-10 12:36:55.555897] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.394 [2024-12-10 12:36:55.555956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.394 [2024-12-10 12:36:55.555973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.394 [2024-12-10 12:36:55.555981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.394 [2024-12-10 12:36:55.555988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.394 [2024-12-10 12:36:55.556004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.394 qpair failed and we were unable to recover it.
00:28:33.654 [2024-12-10 12:36:55.565927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.654 [2024-12-10 12:36:55.565984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.654 [2024-12-10 12:36:55.566001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.654 [2024-12-10 12:36:55.566009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.654 [2024-12-10 12:36:55.566016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.654 [2024-12-10 12:36:55.566033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.654 qpair failed and we were unable to recover it.
00:28:33.654 [2024-12-10 12:36:55.575895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.654 [2024-12-10 12:36:55.575952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.654 [2024-12-10 12:36:55.575966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.654 [2024-12-10 12:36:55.575973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.654 [2024-12-10 12:36:55.575980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.654 [2024-12-10 12:36:55.575996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.654 qpair failed and we were unable to recover it.
00:28:33.654 [2024-12-10 12:36:55.585986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.654 [2024-12-10 12:36:55.586042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.654 [2024-12-10 12:36:55.586059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.654 [2024-12-10 12:36:55.586067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.654 [2024-12-10 12:36:55.586073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.654 [2024-12-10 12:36:55.586089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.654 qpair failed and we were unable to recover it.
00:28:33.654 [2024-12-10 12:36:55.596065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:33.654 [2024-12-10 12:36:55.596172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:33.654 [2024-12-10 12:36:55.596186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:33.654 [2024-12-10 12:36:55.596193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:33.654 [2024-12-10 12:36:55.596200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:33.654 [2024-12-10 12:36:55.596216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:33.654 qpair failed and we were unable to recover it.
00:28:33.654 [2024-12-10 12:36:55.606099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.654 [2024-12-10 12:36:55.606153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.654 [2024-12-10 12:36:55.606172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.654 [2024-12-10 12:36:55.606180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.654 [2024-12-10 12:36:55.606186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.654 [2024-12-10 12:36:55.606202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.654 qpair failed and we were unable to recover it. 
00:28:33.654 [2024-12-10 12:36:55.616076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.654 [2024-12-10 12:36:55.616131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.654 [2024-12-10 12:36:55.616146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.654 [2024-12-10 12:36:55.616153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.654 [2024-12-10 12:36:55.616162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.654 [2024-12-10 12:36:55.616177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.654 qpair failed and we were unable to recover it. 
00:28:33.654 [2024-12-10 12:36:55.626141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.654 [2024-12-10 12:36:55.626249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.654 [2024-12-10 12:36:55.626264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.654 [2024-12-10 12:36:55.626272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.654 [2024-12-10 12:36:55.626281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.654 [2024-12-10 12:36:55.626297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.654 qpair failed and we were unable to recover it. 
00:28:33.654 [2024-12-10 12:36:55.636116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.654 [2024-12-10 12:36:55.636182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.654 [2024-12-10 12:36:55.636197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.654 [2024-12-10 12:36:55.636204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.654 [2024-12-10 12:36:55.636210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.654 [2024-12-10 12:36:55.636226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.654 qpair failed and we were unable to recover it. 
00:28:33.654 [2024-12-10 12:36:55.646150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.654 [2024-12-10 12:36:55.646207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.654 [2024-12-10 12:36:55.646221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.654 [2024-12-10 12:36:55.646228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.654 [2024-12-10 12:36:55.646235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.654 [2024-12-10 12:36:55.646250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.654 qpair failed and we were unable to recover it. 
00:28:33.654 [2024-12-10 12:36:55.656193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.654 [2024-12-10 12:36:55.656253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.654 [2024-12-10 12:36:55.656268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.654 [2024-12-10 12:36:55.656275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.654 [2024-12-10 12:36:55.656281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.654 [2024-12-10 12:36:55.656297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.654 qpair failed and we were unable to recover it. 
00:28:33.654 [2024-12-10 12:36:55.666220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.654 [2024-12-10 12:36:55.666276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.655 [2024-12-10 12:36:55.666290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.655 [2024-12-10 12:36:55.666297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.655 [2024-12-10 12:36:55.666303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.655 [2024-12-10 12:36:55.666318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.655 qpair failed and we were unable to recover it. 
00:28:33.655 [2024-12-10 12:36:55.676265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.655 [2024-12-10 12:36:55.676319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.655 [2024-12-10 12:36:55.676333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.655 [2024-12-10 12:36:55.676340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.655 [2024-12-10 12:36:55.676346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.655 [2024-12-10 12:36:55.676362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.655 qpair failed and we were unable to recover it. 
00:28:33.655 [2024-12-10 12:36:55.686310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.655 [2024-12-10 12:36:55.686415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.655 [2024-12-10 12:36:55.686429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.655 [2024-12-10 12:36:55.686436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.655 [2024-12-10 12:36:55.686442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.655 [2024-12-10 12:36:55.686458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.655 qpair failed and we were unable to recover it. 
00:28:33.655 [2024-12-10 12:36:55.696339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.655 [2024-12-10 12:36:55.696416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.655 [2024-12-10 12:36:55.696430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.655 [2024-12-10 12:36:55.696437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.655 [2024-12-10 12:36:55.696443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.655 [2024-12-10 12:36:55.696459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.655 qpair failed and we were unable to recover it. 
00:28:33.655 [2024-12-10 12:36:55.706332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.655 [2024-12-10 12:36:55.706390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.655 [2024-12-10 12:36:55.706405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.655 [2024-12-10 12:36:55.706413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.655 [2024-12-10 12:36:55.706420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.655 [2024-12-10 12:36:55.706435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.655 qpair failed and we were unable to recover it. 
00:28:33.655 [2024-12-10 12:36:55.716335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.655 [2024-12-10 12:36:55.716400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.655 [2024-12-10 12:36:55.716418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.655 [2024-12-10 12:36:55.716425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.655 [2024-12-10 12:36:55.716431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.655 [2024-12-10 12:36:55.716447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.655 qpair failed and we were unable to recover it. 
00:28:33.655 [2024-12-10 12:36:55.726388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.655 [2024-12-10 12:36:55.726454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.655 [2024-12-10 12:36:55.726468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.655 [2024-12-10 12:36:55.726476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.655 [2024-12-10 12:36:55.726483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.655 [2024-12-10 12:36:55.726498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.655 qpair failed and we were unable to recover it. 
00:28:33.655 [2024-12-10 12:36:55.736434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.655 [2024-12-10 12:36:55.736488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.655 [2024-12-10 12:36:55.736502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.655 [2024-12-10 12:36:55.736509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.655 [2024-12-10 12:36:55.736516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.655 [2024-12-10 12:36:55.736531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.655 qpair failed and we were unable to recover it. 
00:28:33.655 [2024-12-10 12:36:55.746473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.655 [2024-12-10 12:36:55.746530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.655 [2024-12-10 12:36:55.746544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.655 [2024-12-10 12:36:55.746551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.655 [2024-12-10 12:36:55.746557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.655 [2024-12-10 12:36:55.746572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.655 qpair failed and we were unable to recover it. 
00:28:33.655 [2024-12-10 12:36:55.756483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.655 [2024-12-10 12:36:55.756538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.655 [2024-12-10 12:36:55.756551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.655 [2024-12-10 12:36:55.756558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.655 [2024-12-10 12:36:55.756569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.655 [2024-12-10 12:36:55.756584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.655 qpair failed and we were unable to recover it. 
00:28:33.655 [2024-12-10 12:36:55.766509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.655 [2024-12-10 12:36:55.766567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.655 [2024-12-10 12:36:55.766580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.655 [2024-12-10 12:36:55.766589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.655 [2024-12-10 12:36:55.766595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.655 [2024-12-10 12:36:55.766610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.655 qpair failed and we were unable to recover it. 
00:28:33.655 [2024-12-10 12:36:55.776547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.655 [2024-12-10 12:36:55.776605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.655 [2024-12-10 12:36:55.776619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.655 [2024-12-10 12:36:55.776626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.655 [2024-12-10 12:36:55.776633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.655 [2024-12-10 12:36:55.776648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.655 qpair failed and we were unable to recover it. 
00:28:33.655 [2024-12-10 12:36:55.786575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.655 [2024-12-10 12:36:55.786630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.655 [2024-12-10 12:36:55.786644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.655 [2024-12-10 12:36:55.786651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.655 [2024-12-10 12:36:55.786657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.655 [2024-12-10 12:36:55.786673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.655 qpair failed and we were unable to recover it. 
00:28:33.655 [2024-12-10 12:36:55.796597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.655 [2024-12-10 12:36:55.796651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.655 [2024-12-10 12:36:55.796664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.655 [2024-12-10 12:36:55.796671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.655 [2024-12-10 12:36:55.796678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.656 [2024-12-10 12:36:55.796693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.656 qpair failed and we were unable to recover it. 
00:28:33.656 [2024-12-10 12:36:55.806672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.656 [2024-12-10 12:36:55.806732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.656 [2024-12-10 12:36:55.806745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.656 [2024-12-10 12:36:55.806753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.656 [2024-12-10 12:36:55.806759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.656 [2024-12-10 12:36:55.806774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.656 qpair failed and we were unable to recover it. 
00:28:33.656 [2024-12-10 12:36:55.816667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.656 [2024-12-10 12:36:55.816727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.656 [2024-12-10 12:36:55.816745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.656 [2024-12-10 12:36:55.816753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.656 [2024-12-10 12:36:55.816760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.656 [2024-12-10 12:36:55.816777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.656 qpair failed and we were unable to recover it. 
00:28:33.915 [2024-12-10 12:36:55.826723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.915 [2024-12-10 12:36:55.826792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.915 [2024-12-10 12:36:55.826810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.915 [2024-12-10 12:36:55.826818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.915 [2024-12-10 12:36:55.826824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.915 [2024-12-10 12:36:55.826842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.915 qpair failed and we were unable to recover it. 
00:28:33.915 [2024-12-10 12:36:55.836712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.915 [2024-12-10 12:36:55.836768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.915 [2024-12-10 12:36:55.836782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.915 [2024-12-10 12:36:55.836790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.915 [2024-12-10 12:36:55.836797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.915 [2024-12-10 12:36:55.836812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.915 qpair failed and we were unable to recover it. 
00:28:33.916 [2024-12-10 12:36:55.846743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.916 [2024-12-10 12:36:55.846799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.916 [2024-12-10 12:36:55.846818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.916 [2024-12-10 12:36:55.846826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.916 [2024-12-10 12:36:55.846832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.916 [2024-12-10 12:36:55.846848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.916 qpair failed and we were unable to recover it. 
00:28:33.916 [2024-12-10 12:36:55.856705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.916 [2024-12-10 12:36:55.856761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.916 [2024-12-10 12:36:55.856775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.916 [2024-12-10 12:36:55.856782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.916 [2024-12-10 12:36:55.856789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.916 [2024-12-10 12:36:55.856804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.916 qpair failed and we were unable to recover it. 
00:28:33.916 [2024-12-10 12:36:55.866804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.916 [2024-12-10 12:36:55.866860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.916 [2024-12-10 12:36:55.866874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.916 [2024-12-10 12:36:55.866881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.916 [2024-12-10 12:36:55.866887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.916 [2024-12-10 12:36:55.866903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.916 qpair failed and we were unable to recover it. 
00:28:33.916 [2024-12-10 12:36:55.876828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.916 [2024-12-10 12:36:55.876883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.916 [2024-12-10 12:36:55.876897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.916 [2024-12-10 12:36:55.876904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.916 [2024-12-10 12:36:55.876910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.916 [2024-12-10 12:36:55.876926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.916 qpair failed and we were unable to recover it. 
00:28:33.916 [2024-12-10 12:36:55.886850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.916 [2024-12-10 12:36:55.886901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.916 [2024-12-10 12:36:55.886915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.916 [2024-12-10 12:36:55.886925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.916 [2024-12-10 12:36:55.886932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.916 [2024-12-10 12:36:55.886948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.916 qpair failed and we were unable to recover it. 
00:28:33.916 [2024-12-10 12:36:55.896890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.916 [2024-12-10 12:36:55.896956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.916 [2024-12-10 12:36:55.896970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.916 [2024-12-10 12:36:55.896977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.916 [2024-12-10 12:36:55.896984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.916 [2024-12-10 12:36:55.896998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.916 qpair failed and we were unable to recover it. 
00:28:33.916 [2024-12-10 12:36:55.906912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.916 [2024-12-10 12:36:55.906970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.916 [2024-12-10 12:36:55.906984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.916 [2024-12-10 12:36:55.906992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.916 [2024-12-10 12:36:55.906998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.916 [2024-12-10 12:36:55.907014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.916 qpair failed and we were unable to recover it. 
00:28:33.916 [2024-12-10 12:36:55.916871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.916 [2024-12-10 12:36:55.916937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.916 [2024-12-10 12:36:55.916951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.916 [2024-12-10 12:36:55.916959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.916 [2024-12-10 12:36:55.916966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.916 [2024-12-10 12:36:55.916981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.916 qpair failed and we were unable to recover it. 
00:28:33.916 [2024-12-10 12:36:55.926972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.916 [2024-12-10 12:36:55.927037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.916 [2024-12-10 12:36:55.927051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.916 [2024-12-10 12:36:55.927058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.916 [2024-12-10 12:36:55.927064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.916 [2024-12-10 12:36:55.927080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.916 qpair failed and we were unable to recover it. 
00:28:33.916 [2024-12-10 12:36:55.937002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.916 [2024-12-10 12:36:55.937069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.916 [2024-12-10 12:36:55.937084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.916 [2024-12-10 12:36:55.937091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.916 [2024-12-10 12:36:55.937097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.916 [2024-12-10 12:36:55.937112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.916 qpair failed and we were unable to recover it. 
00:28:33.916 [2024-12-10 12:36:55.947062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.916 [2024-12-10 12:36:55.947117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.916 [2024-12-10 12:36:55.947130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.916 [2024-12-10 12:36:55.947138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.916 [2024-12-10 12:36:55.947145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.916 [2024-12-10 12:36:55.947163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.916 qpair failed and we were unable to recover it. 
00:28:33.916 [2024-12-10 12:36:55.957057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.916 [2024-12-10 12:36:55.957113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.916 [2024-12-10 12:36:55.957126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.916 [2024-12-10 12:36:55.957133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.916 [2024-12-10 12:36:55.957140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.916 [2024-12-10 12:36:55.957155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.916 qpair failed and we were unable to recover it. 
00:28:33.916 [2024-12-10 12:36:55.967072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.916 [2024-12-10 12:36:55.967123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.916 [2024-12-10 12:36:55.967137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.916 [2024-12-10 12:36:55.967144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.916 [2024-12-10 12:36:55.967150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.916 [2024-12-10 12:36:55.967169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.916 qpair failed and we were unable to recover it. 
00:28:33.916 [2024-12-10 12:36:55.977123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.916 [2024-12-10 12:36:55.977196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.917 [2024-12-10 12:36:55.977212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.917 [2024-12-10 12:36:55.977220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.917 [2024-12-10 12:36:55.977226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.917 [2024-12-10 12:36:55.977241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.917 qpair failed and we were unable to recover it. 
00:28:33.917 [2024-12-10 12:36:55.987144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.917 [2024-12-10 12:36:55.987205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.917 [2024-12-10 12:36:55.987219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.917 [2024-12-10 12:36:55.987226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.917 [2024-12-10 12:36:55.987233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.917 [2024-12-10 12:36:55.987248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.917 qpair failed and we were unable to recover it. 
00:28:33.917 [2024-12-10 12:36:55.997174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.917 [2024-12-10 12:36:55.997225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.917 [2024-12-10 12:36:55.997238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.917 [2024-12-10 12:36:55.997245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.917 [2024-12-10 12:36:55.997252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.917 [2024-12-10 12:36:55.997267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.917 qpair failed and we were unable to recover it. 
00:28:33.917 [2024-12-10 12:36:56.007200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.917 [2024-12-10 12:36:56.007263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.917 [2024-12-10 12:36:56.007276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.917 [2024-12-10 12:36:56.007283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.917 [2024-12-10 12:36:56.007290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.917 [2024-12-10 12:36:56.007304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.917 qpair failed and we were unable to recover it. 
00:28:33.917 [2024-12-10 12:36:56.017253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.917 [2024-12-10 12:36:56.017331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.917 [2024-12-10 12:36:56.017346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.917 [2024-12-10 12:36:56.017356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.917 [2024-12-10 12:36:56.017362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.917 [2024-12-10 12:36:56.017378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.917 qpair failed and we were unable to recover it. 
00:28:33.917 [2024-12-10 12:36:56.027270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.917 [2024-12-10 12:36:56.027339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.917 [2024-12-10 12:36:56.027353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.917 [2024-12-10 12:36:56.027361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.917 [2024-12-10 12:36:56.027368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.917 [2024-12-10 12:36:56.027384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.917 qpair failed and we were unable to recover it. 
00:28:33.917 [2024-12-10 12:36:56.037306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.917 [2024-12-10 12:36:56.037361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.917 [2024-12-10 12:36:56.037376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.917 [2024-12-10 12:36:56.037383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.917 [2024-12-10 12:36:56.037390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.917 [2024-12-10 12:36:56.037404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.917 qpair failed and we were unable to recover it. 
00:28:33.917 [2024-12-10 12:36:56.047319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.917 [2024-12-10 12:36:56.047368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.917 [2024-12-10 12:36:56.047382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.917 [2024-12-10 12:36:56.047389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.917 [2024-12-10 12:36:56.047396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.917 [2024-12-10 12:36:56.047412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.917 qpair failed and we were unable to recover it. 
00:28:33.917 [2024-12-10 12:36:56.057356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.917 [2024-12-10 12:36:56.057461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.917 [2024-12-10 12:36:56.057474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.917 [2024-12-10 12:36:56.057482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.917 [2024-12-10 12:36:56.057488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.917 [2024-12-10 12:36:56.057506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.917 qpair failed and we were unable to recover it. 
00:28:33.917 [2024-12-10 12:36:56.067378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.917 [2024-12-10 12:36:56.067438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.917 [2024-12-10 12:36:56.067452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.917 [2024-12-10 12:36:56.067460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.917 [2024-12-10 12:36:56.067466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.917 [2024-12-10 12:36:56.067482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.917 qpair failed and we were unable to recover it. 
00:28:33.917 [2024-12-10 12:36:56.077410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:33.917 [2024-12-10 12:36:56.077462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:33.917 [2024-12-10 12:36:56.077479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:33.917 [2024-12-10 12:36:56.077486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:33.917 [2024-12-10 12:36:56.077493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:33.917 [2024-12-10 12:36:56.077510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:33.917 qpair failed and we were unable to recover it. 
00:28:34.177 [2024-12-10 12:36:56.087448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.177 [2024-12-10 12:36:56.087507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.177 [2024-12-10 12:36:56.087524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.177 [2024-12-10 12:36:56.087533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.177 [2024-12-10 12:36:56.087539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.177 [2024-12-10 12:36:56.087556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.177 qpair failed and we were unable to recover it. 
00:28:34.177 [2024-12-10 12:36:56.097476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.177 [2024-12-10 12:36:56.097543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.177 [2024-12-10 12:36:56.097557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.177 [2024-12-10 12:36:56.097565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.177 [2024-12-10 12:36:56.097571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.177 [2024-12-10 12:36:56.097587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.177 qpair failed and we were unable to recover it. 
00:28:34.177 [2024-12-10 12:36:56.107499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.177 [2024-12-10 12:36:56.107565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.177 [2024-12-10 12:36:56.107579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.177 [2024-12-10 12:36:56.107587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.177 [2024-12-10 12:36:56.107594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.177 [2024-12-10 12:36:56.107610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.177 qpair failed and we were unable to recover it. 
00:28:34.177 [2024-12-10 12:36:56.117520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.177 [2024-12-10 12:36:56.117583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.177 [2024-12-10 12:36:56.117598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.177 [2024-12-10 12:36:56.117606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.177 [2024-12-10 12:36:56.117612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.177 [2024-12-10 12:36:56.117628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.177 qpair failed and we were unable to recover it. 
00:28:34.177 [2024-12-10 12:36:56.127578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.177 [2024-12-10 12:36:56.127668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.177 [2024-12-10 12:36:56.127683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.177 [2024-12-10 12:36:56.127690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.177 [2024-12-10 12:36:56.127696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.177 [2024-12-10 12:36:56.127712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.177 qpair failed and we were unable to recover it. 
00:28:34.177 [2024-12-10 12:36:56.137579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.177 [2024-12-10 12:36:56.137633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.177 [2024-12-10 12:36:56.137646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.177 [2024-12-10 12:36:56.137653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.177 [2024-12-10 12:36:56.137660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.177 [2024-12-10 12:36:56.137676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.177 qpair failed and we were unable to recover it. 
00:28:34.177 [2024-12-10 12:36:56.147604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.177 [2024-12-10 12:36:56.147662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.177 [2024-12-10 12:36:56.147679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.177 [2024-12-10 12:36:56.147686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.177 [2024-12-10 12:36:56.147693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.177 [2024-12-10 12:36:56.147708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.177 qpair failed and we were unable to recover it. 
00:28:34.177 [2024-12-10 12:36:56.157615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.177 [2024-12-10 12:36:56.157668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.177 [2024-12-10 12:36:56.157682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.177 [2024-12-10 12:36:56.157689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.177 [2024-12-10 12:36:56.157696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.177 [2024-12-10 12:36:56.157711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.177 qpair failed and we were unable to recover it. 
00:28:34.177 [2024-12-10 12:36:56.167652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.177 [2024-12-10 12:36:56.167708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.177 [2024-12-10 12:36:56.167722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.177 [2024-12-10 12:36:56.167729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.177 [2024-12-10 12:36:56.167736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.177 [2024-12-10 12:36:56.167751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.177 qpair failed and we were unable to recover it. 
00:28:34.177 [2024-12-10 12:36:56.177694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.177 [2024-12-10 12:36:56.177760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.177 [2024-12-10 12:36:56.177773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.177 [2024-12-10 12:36:56.177781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.177 [2024-12-10 12:36:56.177787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.177 [2024-12-10 12:36:56.177801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.177 qpair failed and we were unable to recover it. 
00:28:34.177 [2024-12-10 12:36:56.187753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.177 [2024-12-10 12:36:56.187826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.177 [2024-12-10 12:36:56.187841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.177 [2024-12-10 12:36:56.187848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.177 [2024-12-10 12:36:56.187857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.177 [2024-12-10 12:36:56.187872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.177 qpair failed and we were unable to recover it. 
00:28:34.178 [2024-12-10 12:36:56.197730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.178 [2024-12-10 12:36:56.197782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.178 [2024-12-10 12:36:56.197796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.178 [2024-12-10 12:36:56.197803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.178 [2024-12-10 12:36:56.197809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.178 [2024-12-10 12:36:56.197825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.178 qpair failed and we were unable to recover it. 
00:28:34.178 [2024-12-10 12:36:56.207778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.178 [2024-12-10 12:36:56.207831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.178 [2024-12-10 12:36:56.207845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.178 [2024-12-10 12:36:56.207852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.178 [2024-12-10 12:36:56.207858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.178 [2024-12-10 12:36:56.207873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.178 qpair failed and we were unable to recover it.
00:28:34.178 [2024-12-10 12:36:56.217815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.178 [2024-12-10 12:36:56.217884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.178 [2024-12-10 12:36:56.217898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.178 [2024-12-10 12:36:56.217906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.178 [2024-12-10 12:36:56.217913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.178 [2024-12-10 12:36:56.217927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.178 qpair failed and we were unable to recover it.
00:28:34.178 [2024-12-10 12:36:56.227828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.178 [2024-12-10 12:36:56.227890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.178 [2024-12-10 12:36:56.227904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.178 [2024-12-10 12:36:56.227912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.178 [2024-12-10 12:36:56.227918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.178 [2024-12-10 12:36:56.227934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.178 qpair failed and we were unable to recover it.
00:28:34.178 [2024-12-10 12:36:56.237848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.178 [2024-12-10 12:36:56.237902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.178 [2024-12-10 12:36:56.237916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.178 [2024-12-10 12:36:56.237923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.178 [2024-12-10 12:36:56.237930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.178 [2024-12-10 12:36:56.237945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.178 qpair failed and we were unable to recover it.
00:28:34.178 [2024-12-10 12:36:56.247883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.178 [2024-12-10 12:36:56.247936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.178 [2024-12-10 12:36:56.247950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.178 [2024-12-10 12:36:56.247957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.178 [2024-12-10 12:36:56.247963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.178 [2024-12-10 12:36:56.247978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.178 qpair failed and we were unable to recover it.
00:28:34.178 [2024-12-10 12:36:56.257932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.178 [2024-12-10 12:36:56.257990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.178 [2024-12-10 12:36:56.258004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.178 [2024-12-10 12:36:56.258012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.178 [2024-12-10 12:36:56.258019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.178 [2024-12-10 12:36:56.258033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.178 qpair failed and we were unable to recover it.
00:28:34.178 [2024-12-10 12:36:56.267950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.178 [2024-12-10 12:36:56.268010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.178 [2024-12-10 12:36:56.268023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.178 [2024-12-10 12:36:56.268031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.178 [2024-12-10 12:36:56.268038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.178 [2024-12-10 12:36:56.268053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.178 qpair failed and we were unable to recover it.
00:28:34.178 [2024-12-10 12:36:56.277969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.178 [2024-12-10 12:36:56.278026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.178 [2024-12-10 12:36:56.278043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.178 [2024-12-10 12:36:56.278051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.178 [2024-12-10 12:36:56.278058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.178 [2024-12-10 12:36:56.278073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.178 qpair failed and we were unable to recover it.
00:28:34.178 [2024-12-10 12:36:56.288028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.178 [2024-12-10 12:36:56.288090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.178 [2024-12-10 12:36:56.288104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.178 [2024-12-10 12:36:56.288111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.178 [2024-12-10 12:36:56.288118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.178 [2024-12-10 12:36:56.288134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.178 qpair failed and we were unable to recover it.
00:28:34.178 [2024-12-10 12:36:56.298049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.178 [2024-12-10 12:36:56.298119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.178 [2024-12-10 12:36:56.298132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.178 [2024-12-10 12:36:56.298140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.178 [2024-12-10 12:36:56.298146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.178 [2024-12-10 12:36:56.298165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.178 qpair failed and we were unable to recover it.
00:28:34.178 [2024-12-10 12:36:56.308050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.178 [2024-12-10 12:36:56.308107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.178 [2024-12-10 12:36:56.308121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.178 [2024-12-10 12:36:56.308128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.178 [2024-12-10 12:36:56.308135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.178 [2024-12-10 12:36:56.308150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.178 qpair failed and we were unable to recover it.
00:28:34.178 [2024-12-10 12:36:56.318080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.178 [2024-12-10 12:36:56.318134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.178 [2024-12-10 12:36:56.318150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.178 [2024-12-10 12:36:56.318161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.178 [2024-12-10 12:36:56.318171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.178 [2024-12-10 12:36:56.318187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.178 qpair failed and we were unable to recover it.
00:28:34.178 [2024-12-10 12:36:56.328108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.178 [2024-12-10 12:36:56.328167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.178 [2024-12-10 12:36:56.328182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.179 [2024-12-10 12:36:56.328190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.179 [2024-12-10 12:36:56.328196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.179 [2024-12-10 12:36:56.328212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.179 qpair failed and we were unable to recover it.
00:28:34.179 [2024-12-10 12:36:56.338139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.179 [2024-12-10 12:36:56.338202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.179 [2024-12-10 12:36:56.338219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.179 [2024-12-10 12:36:56.338227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.179 [2024-12-10 12:36:56.338234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.179 [2024-12-10 12:36:56.338251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.179 qpair failed and we were unable to recover it.
00:28:34.438 [2024-12-10 12:36:56.348188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.438 [2024-12-10 12:36:56.348248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.438 [2024-12-10 12:36:56.348266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.438 [2024-12-10 12:36:56.348274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.438 [2024-12-10 12:36:56.348281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.438 [2024-12-10 12:36:56.348298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.438 qpair failed and we were unable to recover it.
00:28:34.438 [2024-12-10 12:36:56.358201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.438 [2024-12-10 12:36:56.358260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.438 [2024-12-10 12:36:56.358274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.438 [2024-12-10 12:36:56.358281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.438 [2024-12-10 12:36:56.358288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.438 [2024-12-10 12:36:56.358303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.438 qpair failed and we were unable to recover it.
00:28:34.438 [2024-12-10 12:36:56.368206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.438 [2024-12-10 12:36:56.368261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.438 [2024-12-10 12:36:56.368276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.438 [2024-12-10 12:36:56.368283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.438 [2024-12-10 12:36:56.368289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.438 [2024-12-10 12:36:56.368305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.438 qpair failed and we were unable to recover it.
00:28:34.438 [2024-12-10 12:36:56.378270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.438 [2024-12-10 12:36:56.378372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.438 [2024-12-10 12:36:56.378387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.438 [2024-12-10 12:36:56.378394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.438 [2024-12-10 12:36:56.378401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.438 [2024-12-10 12:36:56.378417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.438 qpair failed and we were unable to recover it.
00:28:34.438 [2024-12-10 12:36:56.388277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.438 [2024-12-10 12:36:56.388334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.438 [2024-12-10 12:36:56.388349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.438 [2024-12-10 12:36:56.388356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.438 [2024-12-10 12:36:56.388362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.438 [2024-12-10 12:36:56.388378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.438 qpair failed and we were unable to recover it.
00:28:34.438 [2024-12-10 12:36:56.398317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.438 [2024-12-10 12:36:56.398378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.438 [2024-12-10 12:36:56.398392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.438 [2024-12-10 12:36:56.398400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.438 [2024-12-10 12:36:56.398407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.438 [2024-12-10 12:36:56.398422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.438 qpair failed and we were unable to recover it.
00:28:34.438 [2024-12-10 12:36:56.408314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.438 [2024-12-10 12:36:56.408366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.438 [2024-12-10 12:36:56.408383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.438 [2024-12-10 12:36:56.408390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.438 [2024-12-10 12:36:56.408397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.438 [2024-12-10 12:36:56.408413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.438 qpair failed and we were unable to recover it.
00:28:34.438 [2024-12-10 12:36:56.418384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.438 [2024-12-10 12:36:56.418459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.438 [2024-12-10 12:36:56.418474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.438 [2024-12-10 12:36:56.418482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.438 [2024-12-10 12:36:56.418488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.438 [2024-12-10 12:36:56.418504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.438 qpair failed and we were unable to recover it.
00:28:34.438 [2024-12-10 12:36:56.428401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.438 [2024-12-10 12:36:56.428461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.438 [2024-12-10 12:36:56.428476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.438 [2024-12-10 12:36:56.428483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.438 [2024-12-10 12:36:56.428489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.438 [2024-12-10 12:36:56.428505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.438 qpair failed and we were unable to recover it.
00:28:34.438 [2024-12-10 12:36:56.438407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.438 [2024-12-10 12:36:56.438460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.438 [2024-12-10 12:36:56.438474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.438 [2024-12-10 12:36:56.438482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.438 [2024-12-10 12:36:56.438489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.438 [2024-12-10 12:36:56.438504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.438 qpair failed and we were unable to recover it.
00:28:34.439 [2024-12-10 12:36:56.448464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.439 [2024-12-10 12:36:56.448519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.439 [2024-12-10 12:36:56.448534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.439 [2024-12-10 12:36:56.448544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.439 [2024-12-10 12:36:56.448550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.439 [2024-12-10 12:36:56.448565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.439 qpair failed and we were unable to recover it.
00:28:34.439 [2024-12-10 12:36:56.458482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.439 [2024-12-10 12:36:56.458540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.439 [2024-12-10 12:36:56.458554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.439 [2024-12-10 12:36:56.458561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.439 [2024-12-10 12:36:56.458568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.439 [2024-12-10 12:36:56.458583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.439 qpair failed and we were unable to recover it.
00:28:34.439 [2024-12-10 12:36:56.468508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.439 [2024-12-10 12:36:56.468575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.439 [2024-12-10 12:36:56.468589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.439 [2024-12-10 12:36:56.468597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.439 [2024-12-10 12:36:56.468603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.439 [2024-12-10 12:36:56.468619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.439 qpair failed and we were unable to recover it.
00:28:34.439 [2024-12-10 12:36:56.478538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.439 [2024-12-10 12:36:56.478596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.439 [2024-12-10 12:36:56.478609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.439 [2024-12-10 12:36:56.478616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.439 [2024-12-10 12:36:56.478623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.439 [2024-12-10 12:36:56.478638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.439 qpair failed and we were unable to recover it.
00:28:34.439 [2024-12-10 12:36:56.488568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.439 [2024-12-10 12:36:56.488623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.439 [2024-12-10 12:36:56.488637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.439 [2024-12-10 12:36:56.488645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.439 [2024-12-10 12:36:56.488651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.439 [2024-12-10 12:36:56.488670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.439 qpair failed and we were unable to recover it.
00:28:34.439 [2024-12-10 12:36:56.498606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.439 [2024-12-10 12:36:56.498659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.439 [2024-12-10 12:36:56.498673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.439 [2024-12-10 12:36:56.498680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.439 [2024-12-10 12:36:56.498686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.439 [2024-12-10 12:36:56.498701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.439 qpair failed and we were unable to recover it.
00:28:34.439 [2024-12-10 12:36:56.508624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.439 [2024-12-10 12:36:56.508678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.439 [2024-12-10 12:36:56.508692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.439 [2024-12-10 12:36:56.508699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.439 [2024-12-10 12:36:56.508706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.439 [2024-12-10 12:36:56.508722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.439 qpair failed and we were unable to recover it.
00:28:34.439 [2024-12-10 12:36:56.518650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.439 [2024-12-10 12:36:56.518702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.439 [2024-12-10 12:36:56.518718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.439 [2024-12-10 12:36:56.518725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.439 [2024-12-10 12:36:56.518732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.439 [2024-12-10 12:36:56.518747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.439 qpair failed and we were unable to recover it. 
00:28:34.439 [2024-12-10 12:36:56.528687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.439 [2024-12-10 12:36:56.528747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.439 [2024-12-10 12:36:56.528763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.439 [2024-12-10 12:36:56.528770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.439 [2024-12-10 12:36:56.528777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.439 [2024-12-10 12:36:56.528792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.439 qpair failed and we were unable to recover it. 
00:28:34.439 [2024-12-10 12:36:56.538712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.439 [2024-12-10 12:36:56.538778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.439 [2024-12-10 12:36:56.538795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.439 [2024-12-10 12:36:56.538802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.439 [2024-12-10 12:36:56.538808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.439 [2024-12-10 12:36:56.538823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.439 qpair failed and we were unable to recover it. 
00:28:34.439 [2024-12-10 12:36:56.548686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.439 [2024-12-10 12:36:56.548754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.439 [2024-12-10 12:36:56.548769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.439 [2024-12-10 12:36:56.548776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.439 [2024-12-10 12:36:56.548784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.439 [2024-12-10 12:36:56.548800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.439 qpair failed and we were unable to recover it. 
00:28:34.439 [2024-12-10 12:36:56.558764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.439 [2024-12-10 12:36:56.558820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.439 [2024-12-10 12:36:56.558834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.439 [2024-12-10 12:36:56.558841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.439 [2024-12-10 12:36:56.558848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.439 [2024-12-10 12:36:56.558865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.439 qpair failed and we were unable to recover it. 
00:28:34.439 [2024-12-10 12:36:56.568790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.439 [2024-12-10 12:36:56.568846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.439 [2024-12-10 12:36:56.568859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.439 [2024-12-10 12:36:56.568866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.439 [2024-12-10 12:36:56.568872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.439 [2024-12-10 12:36:56.568887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.439 qpair failed and we were unable to recover it. 
00:28:34.439 [2024-12-10 12:36:56.578765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.439 [2024-12-10 12:36:56.578824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.440 [2024-12-10 12:36:56.578838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.440 [2024-12-10 12:36:56.578850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.440 [2024-12-10 12:36:56.578857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.440 [2024-12-10 12:36:56.578872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.440 qpair failed and we were unable to recover it. 
00:28:34.440 [2024-12-10 12:36:56.588783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.440 [2024-12-10 12:36:56.588840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.440 [2024-12-10 12:36:56.588854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.440 [2024-12-10 12:36:56.588861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.440 [2024-12-10 12:36:56.588867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.440 [2024-12-10 12:36:56.588883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.440 qpair failed and we were unable to recover it. 
00:28:34.440 [2024-12-10 12:36:56.598902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.440 [2024-12-10 12:36:56.599000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.440 [2024-12-10 12:36:56.599018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.440 [2024-12-10 12:36:56.599026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.440 [2024-12-10 12:36:56.599032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.440 [2024-12-10 12:36:56.599049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.440 qpair failed and we were unable to recover it. 
00:28:34.699 [2024-12-10 12:36:56.608927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.699 [2024-12-10 12:36:56.608985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.699 [2024-12-10 12:36:56.609004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.699 [2024-12-10 12:36:56.609013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.699 [2024-12-10 12:36:56.609020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.699 [2024-12-10 12:36:56.609037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.699 qpair failed and we were unable to recover it. 
00:28:34.699 [2024-12-10 12:36:56.618875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.699 [2024-12-10 12:36:56.618931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.699 [2024-12-10 12:36:56.618946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.699 [2024-12-10 12:36:56.618953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.699 [2024-12-10 12:36:56.618960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.699 [2024-12-10 12:36:56.618979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.699 qpair failed and we were unable to recover it. 
00:28:34.699 [2024-12-10 12:36:56.628891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.699 [2024-12-10 12:36:56.628945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.699 [2024-12-10 12:36:56.628959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.699 [2024-12-10 12:36:56.628966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.699 [2024-12-10 12:36:56.628974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.699 [2024-12-10 12:36:56.628990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.699 qpair failed and we were unable to recover it. 
00:28:34.699 [2024-12-10 12:36:56.638998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.699 [2024-12-10 12:36:56.639058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.699 [2024-12-10 12:36:56.639072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.699 [2024-12-10 12:36:56.639080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.699 [2024-12-10 12:36:56.639087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.699 [2024-12-10 12:36:56.639101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.699 qpair failed and we were unable to recover it. 
00:28:34.699 [2024-12-10 12:36:56.648983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.699 [2024-12-10 12:36:56.649035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.700 [2024-12-10 12:36:56.649048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.700 [2024-12-10 12:36:56.649056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.700 [2024-12-10 12:36:56.649062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.700 [2024-12-10 12:36:56.649078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.700 qpair failed and we were unable to recover it. 
00:28:34.700 [2024-12-10 12:36:56.659056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.700 [2024-12-10 12:36:56.659116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.700 [2024-12-10 12:36:56.659129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.700 [2024-12-10 12:36:56.659136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.700 [2024-12-10 12:36:56.659143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.700 [2024-12-10 12:36:56.659162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.700 qpair failed and we were unable to recover it. 
00:28:34.700 [2024-12-10 12:36:56.669087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.700 [2024-12-10 12:36:56.669141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.700 [2024-12-10 12:36:56.669155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.700 [2024-12-10 12:36:56.669166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.700 [2024-12-10 12:36:56.669173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.700 [2024-12-10 12:36:56.669188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.700 qpair failed and we were unable to recover it. 
00:28:34.700 [2024-12-10 12:36:56.679109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.700 [2024-12-10 12:36:56.679164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.700 [2024-12-10 12:36:56.679179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.700 [2024-12-10 12:36:56.679185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.700 [2024-12-10 12:36:56.679192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.700 [2024-12-10 12:36:56.679207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.700 qpair failed and we were unable to recover it. 
00:28:34.700 [2024-12-10 12:36:56.689146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.700 [2024-12-10 12:36:56.689211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.700 [2024-12-10 12:36:56.689225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.700 [2024-12-10 12:36:56.689232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.700 [2024-12-10 12:36:56.689239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.700 [2024-12-10 12:36:56.689253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.700 qpair failed and we were unable to recover it. 
00:28:34.700 [2024-12-10 12:36:56.699195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.700 [2024-12-10 12:36:56.699257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.700 [2024-12-10 12:36:56.699270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.700 [2024-12-10 12:36:56.699277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.700 [2024-12-10 12:36:56.699284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.700 [2024-12-10 12:36:56.699299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.700 qpair failed and we were unable to recover it. 
00:28:34.700 [2024-12-10 12:36:56.709239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.700 [2024-12-10 12:36:56.709302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.700 [2024-12-10 12:36:56.709318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.700 [2024-12-10 12:36:56.709326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.700 [2024-12-10 12:36:56.709333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.700 [2024-12-10 12:36:56.709349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.700 qpair failed and we were unable to recover it. 
00:28:34.700 [2024-12-10 12:36:56.719263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.700 [2024-12-10 12:36:56.719322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.700 [2024-12-10 12:36:56.719337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.700 [2024-12-10 12:36:56.719344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.700 [2024-12-10 12:36:56.719351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.700 [2024-12-10 12:36:56.719366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.700 qpair failed and we were unable to recover it. 
00:28:34.700 [2024-12-10 12:36:56.729288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.700 [2024-12-10 12:36:56.729363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.700 [2024-12-10 12:36:56.729377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.700 [2024-12-10 12:36:56.729384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.700 [2024-12-10 12:36:56.729391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.700 [2024-12-10 12:36:56.729406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.700 qpair failed and we were unable to recover it. 
00:28:34.700 [2024-12-10 12:36:56.739254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.700 [2024-12-10 12:36:56.739313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.700 [2024-12-10 12:36:56.739326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.700 [2024-12-10 12:36:56.739333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.700 [2024-12-10 12:36:56.739340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.700 [2024-12-10 12:36:56.739355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.700 qpair failed and we were unable to recover it. 
00:28:34.700 [2024-12-10 12:36:56.749344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.700 [2024-12-10 12:36:56.749399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.700 [2024-12-10 12:36:56.749413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.700 [2024-12-10 12:36:56.749420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.700 [2024-12-10 12:36:56.749431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.700 [2024-12-10 12:36:56.749446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.700 qpair failed and we were unable to recover it. 
00:28:34.700 [2024-12-10 12:36:56.759348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.700 [2024-12-10 12:36:56.759408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.700 [2024-12-10 12:36:56.759422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.700 [2024-12-10 12:36:56.759429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.700 [2024-12-10 12:36:56.759436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.700 [2024-12-10 12:36:56.759452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.700 qpair failed and we were unable to recover it. 
00:28:34.700 [2024-12-10 12:36:56.769384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.700 [2024-12-10 12:36:56.769437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.700 [2024-12-10 12:36:56.769452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.700 [2024-12-10 12:36:56.769459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.700 [2024-12-10 12:36:56.769465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.700 [2024-12-10 12:36:56.769480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.700 qpair failed and we were unable to recover it. 
00:28:34.700 [2024-12-10 12:36:56.779408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.700 [2024-12-10 12:36:56.779486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.700 [2024-12-10 12:36:56.779500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.700 [2024-12-10 12:36:56.779507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.701 [2024-12-10 12:36:56.779513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.701 [2024-12-10 12:36:56.779529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.701 qpair failed and we were unable to recover it. 
00:28:34.701 [2024-12-10 12:36:56.789431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.701 [2024-12-10 12:36:56.789489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.701 [2024-12-10 12:36:56.789504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.701 [2024-12-10 12:36:56.789511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.701 [2024-12-10 12:36:56.789517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.701 [2024-12-10 12:36:56.789533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.701 qpair failed and we were unable to recover it. 
00:28:34.701 [2024-12-10 12:36:56.799448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.701 [2024-12-10 12:36:56.799506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.701 [2024-12-10 12:36:56.799520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.701 [2024-12-10 12:36:56.799527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.701 [2024-12-10 12:36:56.799534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.701 [2024-12-10 12:36:56.799549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.701 qpair failed and we were unable to recover it. 
00:28:34.701 [2024-12-10 12:36:56.809499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.701 [2024-12-10 12:36:56.809551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.701 [2024-12-10 12:36:56.809565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.701 [2024-12-10 12:36:56.809572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.701 [2024-12-10 12:36:56.809579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.701 [2024-12-10 12:36:56.809594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.701 qpair failed and we were unable to recover it. 
00:28:34.701 [2024-12-10 12:36:56.819534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.701 [2024-12-10 12:36:56.819594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.701 [2024-12-10 12:36:56.819609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.701 [2024-12-10 12:36:56.819616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.701 [2024-12-10 12:36:56.819623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.701 [2024-12-10 12:36:56.819639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.701 qpair failed and we were unable to recover it. 
00:28:34.701 [2024-12-10 12:36:56.829494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.701 [2024-12-10 12:36:56.829554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.701 [2024-12-10 12:36:56.829569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.701 [2024-12-10 12:36:56.829576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.701 [2024-12-10 12:36:56.829582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.701 [2024-12-10 12:36:56.829598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.701 qpair failed and we were unable to recover it. 
00:28:34.701 [2024-12-10 12:36:56.839519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.701 [2024-12-10 12:36:56.839573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.701 [2024-12-10 12:36:56.839590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.701 [2024-12-10 12:36:56.839597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.701 [2024-12-10 12:36:56.839603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.701 [2024-12-10 12:36:56.839619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.701 qpair failed and we were unable to recover it. 
00:28:34.701 [2024-12-10 12:36:56.849578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.701 [2024-12-10 12:36:56.849651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.701 [2024-12-10 12:36:56.849667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.701 [2024-12-10 12:36:56.849675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.701 [2024-12-10 12:36:56.849681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.701 [2024-12-10 12:36:56.849697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.701 qpair failed and we were unable to recover it. 
00:28:34.701 [2024-12-10 12:36:56.859644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.701 [2024-12-10 12:36:56.859719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.701 [2024-12-10 12:36:56.859740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.701 [2024-12-10 12:36:56.859752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.701 [2024-12-10 12:36:56.859761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.701 [2024-12-10 12:36:56.859784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.701 qpair failed and we were unable to recover it. 
00:28:34.961 [2024-12-10 12:36:56.869612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.961 [2024-12-10 12:36:56.869700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.961 [2024-12-10 12:36:56.869717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.961 [2024-12-10 12:36:56.869725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.961 [2024-12-10 12:36:56.869732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.961 [2024-12-10 12:36:56.869749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.961 qpair failed and we were unable to recover it. 
00:28:34.961 [2024-12-10 12:36:56.879623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.961 [2024-12-10 12:36:56.879682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.961 [2024-12-10 12:36:56.879696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.961 [2024-12-10 12:36:56.879704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.961 [2024-12-10 12:36:56.879714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.961 [2024-12-10 12:36:56.879729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.961 qpair failed and we were unable to recover it. 
00:28:34.961 [2024-12-10 12:36:56.889718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.961 [2024-12-10 12:36:56.889772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.961 [2024-12-10 12:36:56.889786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.961 [2024-12-10 12:36:56.889793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.961 [2024-12-10 12:36:56.889800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.961 [2024-12-10 12:36:56.889817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.961 qpair failed and we were unable to recover it. 
00:28:34.961 [2024-12-10 12:36:56.899683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.961 [2024-12-10 12:36:56.899767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.961 [2024-12-10 12:36:56.899781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.961 [2024-12-10 12:36:56.899789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.961 [2024-12-10 12:36:56.899795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.961 [2024-12-10 12:36:56.899811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.961 qpair failed and we were unable to recover it. 
00:28:34.961 [2024-12-10 12:36:56.909776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.961 [2024-12-10 12:36:56.909830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.961 [2024-12-10 12:36:56.909844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.961 [2024-12-10 12:36:56.909852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.961 [2024-12-10 12:36:56.909858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.961 [2024-12-10 12:36:56.909873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.961 qpair failed and we were unable to recover it. 
00:28:34.961 [2024-12-10 12:36:56.919759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.961 [2024-12-10 12:36:56.919810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.961 [2024-12-10 12:36:56.919825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.961 [2024-12-10 12:36:56.919832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.961 [2024-12-10 12:36:56.919838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.961 [2024-12-10 12:36:56.919854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.961 qpair failed and we were unable to recover it. 
00:28:34.961 [2024-12-10 12:36:56.929889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.961 [2024-12-10 12:36:56.929945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.961 [2024-12-10 12:36:56.929960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.961 [2024-12-10 12:36:56.929967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.961 [2024-12-10 12:36:56.929974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.961 [2024-12-10 12:36:56.929989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.961 qpair failed and we were unable to recover it. 
00:28:34.961 [2024-12-10 12:36:56.939910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.961 [2024-12-10 12:36:56.939967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.961 [2024-12-10 12:36:56.939981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.961 [2024-12-10 12:36:56.939989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.961 [2024-12-10 12:36:56.939997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.961 [2024-12-10 12:36:56.940012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.961 qpair failed and we were unable to recover it. 
00:28:34.961 [2024-12-10 12:36:56.949901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.961 [2024-12-10 12:36:56.949977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.961 [2024-12-10 12:36:56.949990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.961 [2024-12-10 12:36:56.949998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.961 [2024-12-10 12:36:56.950004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.961 [2024-12-10 12:36:56.950020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.961 qpair failed and we were unable to recover it. 
00:28:34.961 [2024-12-10 12:36:56.959920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.961 [2024-12-10 12:36:56.959974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.961 [2024-12-10 12:36:56.959988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.961 [2024-12-10 12:36:56.959995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.961 [2024-12-10 12:36:56.960001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.961 [2024-12-10 12:36:56.960016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.961 qpair failed and we were unable to recover it. 
00:28:34.961 [2024-12-10 12:36:56.969944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.961 [2024-12-10 12:36:56.970000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.961 [2024-12-10 12:36:56.970017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.961 [2024-12-10 12:36:56.970025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.961 [2024-12-10 12:36:56.970031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.961 [2024-12-10 12:36:56.970047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.961 qpair failed and we were unable to recover it. 
00:28:34.961 [2024-12-10 12:36:56.979918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.961 [2024-12-10 12:36:56.979972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.961 [2024-12-10 12:36:56.979986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.961 [2024-12-10 12:36:56.979993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.961 [2024-12-10 12:36:56.979999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.961 [2024-12-10 12:36:56.980015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.961 qpair failed and we were unable to recover it. 
00:28:34.961 [2024-12-10 12:36:56.990020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.961 [2024-12-10 12:36:56.990077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.961 [2024-12-10 12:36:56.990092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.961 [2024-12-10 12:36:56.990099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.962 [2024-12-10 12:36:56.990105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.962 [2024-12-10 12:36:56.990121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.962 qpair failed and we were unable to recover it. 
00:28:34.962 [2024-12-10 12:36:57.000038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.962 [2024-12-10 12:36:57.000091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.962 [2024-12-10 12:36:57.000106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.962 [2024-12-10 12:36:57.000113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.962 [2024-12-10 12:36:57.000119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.962 [2024-12-10 12:36:57.000134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.962 qpair failed and we were unable to recover it. 
00:28:34.962 [2024-12-10 12:36:57.010070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.962 [2024-12-10 12:36:57.010128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.962 [2024-12-10 12:36:57.010141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.962 [2024-12-10 12:36:57.010151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.962 [2024-12-10 12:36:57.010163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.962 [2024-12-10 12:36:57.010179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.962 qpair failed and we were unable to recover it. 
00:28:34.962 [2024-12-10 12:36:57.020116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.962 [2024-12-10 12:36:57.020191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.962 [2024-12-10 12:36:57.020206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.962 [2024-12-10 12:36:57.020214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.962 [2024-12-10 12:36:57.020220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.962 [2024-12-10 12:36:57.020236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.962 qpair failed and we were unable to recover it. 
00:28:34.962 [2024-12-10 12:36:57.030127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.962 [2024-12-10 12:36:57.030187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.962 [2024-12-10 12:36:57.030201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.962 [2024-12-10 12:36:57.030209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.962 [2024-12-10 12:36:57.030215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.962 [2024-12-10 12:36:57.030231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.962 qpair failed and we were unable to recover it. 
00:28:34.962 [2024-12-10 12:36:57.040197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.962 [2024-12-10 12:36:57.040254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.962 [2024-12-10 12:36:57.040267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.962 [2024-12-10 12:36:57.040275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.962 [2024-12-10 12:36:57.040282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.962 [2024-12-10 12:36:57.040297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.962 qpair failed and we were unable to recover it. 
00:28:34.962 [2024-12-10 12:36:57.050203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.962 [2024-12-10 12:36:57.050256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.962 [2024-12-10 12:36:57.050270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.962 [2024-12-10 12:36:57.050277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.962 [2024-12-10 12:36:57.050284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.962 [2024-12-10 12:36:57.050302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.962 qpair failed and we were unable to recover it. 
00:28:34.962 [2024-12-10 12:36:57.060245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.962 [2024-12-10 12:36:57.060320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.962 [2024-12-10 12:36:57.060334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.962 [2024-12-10 12:36:57.060341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.962 [2024-12-10 12:36:57.060347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.962 [2024-12-10 12:36:57.060362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.962 qpair failed and we were unable to recover it. 
00:28:34.962 [2024-12-10 12:36:57.070261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.962 [2024-12-10 12:36:57.070323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.962 [2024-12-10 12:36:57.070337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.962 [2024-12-10 12:36:57.070344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.962 [2024-12-10 12:36:57.070350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.962 [2024-12-10 12:36:57.070366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.962 qpair failed and we were unable to recover it. 
00:28:34.962 [2024-12-10 12:36:57.080254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.962 [2024-12-10 12:36:57.080314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.962 [2024-12-10 12:36:57.080327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.962 [2024-12-10 12:36:57.080334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.962 [2024-12-10 12:36:57.080341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.962 [2024-12-10 12:36:57.080355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.962 qpair failed and we were unable to recover it. 
00:28:34.962 [2024-12-10 12:36:57.090324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.962 [2024-12-10 12:36:57.090385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.962 [2024-12-10 12:36:57.090398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.962 [2024-12-10 12:36:57.090406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.962 [2024-12-10 12:36:57.090412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.962 [2024-12-10 12:36:57.090428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.962 qpair failed and we were unable to recover it. 
00:28:34.962 [2024-12-10 12:36:57.100345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.962 [2024-12-10 12:36:57.100406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.962 [2024-12-10 12:36:57.100420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.962 [2024-12-10 12:36:57.100427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.962 [2024-12-10 12:36:57.100434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.962 [2024-12-10 12:36:57.100449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.962 qpair failed and we were unable to recover it. 
00:28:34.962 [2024-12-10 12:36:57.110376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:34.962 [2024-12-10 12:36:57.110435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:34.962 [2024-12-10 12:36:57.110449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:34.962 [2024-12-10 12:36:57.110457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:34.962 [2024-12-10 12:36:57.110463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:34.962 [2024-12-10 12:36:57.110478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:34.962 qpair failed and we were unable to recover it. 
00:28:34.962 [2024-12-10 12:36:57.120394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:34.962 [2024-12-10 12:36:57.120453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:34.963 [2024-12-10 12:36:57.120468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:34.963 [2024-12-10 12:36:57.120475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:34.963 [2024-12-10 12:36:57.120482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:34.963 [2024-12-10 12:36:57.120497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:34.963 qpair failed and we were unable to recover it.
00:28:35.222 [2024-12-10 12:36:57.130366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.222 [2024-12-10 12:36:57.130421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.222 [2024-12-10 12:36:57.130439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.222 [2024-12-10 12:36:57.130447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.222 [2024-12-10 12:36:57.130454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.222 [2024-12-10 12:36:57.130471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.222 qpair failed and we were unable to recover it.
00:28:35.222 [2024-12-10 12:36:57.140476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.222 [2024-12-10 12:36:57.140532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.222 [2024-12-10 12:36:57.140548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.222 [2024-12-10 12:36:57.140558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.222 [2024-12-10 12:36:57.140565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.222 [2024-12-10 12:36:57.140580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.222 qpair failed and we were unable to recover it.
00:28:35.222 [2024-12-10 12:36:57.150492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.222 [2024-12-10 12:36:57.150547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.222 [2024-12-10 12:36:57.150563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.222 [2024-12-10 12:36:57.150570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.222 [2024-12-10 12:36:57.150577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.222 [2024-12-10 12:36:57.150592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.222 qpair failed and we were unable to recover it.
00:28:35.222 [2024-12-10 12:36:57.160486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.222 [2024-12-10 12:36:57.160548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.222 [2024-12-10 12:36:57.160563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.222 [2024-12-10 12:36:57.160570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.222 [2024-12-10 12:36:57.160577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.222 [2024-12-10 12:36:57.160592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.222 qpair failed and we were unable to recover it.
00:28:35.222 [2024-12-10 12:36:57.170541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.222 [2024-12-10 12:36:57.170608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.222 [2024-12-10 12:36:57.170623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.222 [2024-12-10 12:36:57.170630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.222 [2024-12-10 12:36:57.170636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.222 [2024-12-10 12:36:57.170651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.222 qpair failed and we were unable to recover it.
00:28:35.222 [2024-12-10 12:36:57.180619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.222 [2024-12-10 12:36:57.180676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.222 [2024-12-10 12:36:57.180690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.222 [2024-12-10 12:36:57.180697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.222 [2024-12-10 12:36:57.180704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.223 [2024-12-10 12:36:57.180722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.223 qpair failed and we were unable to recover it.
00:28:35.223 [2024-12-10 12:36:57.190615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.223 [2024-12-10 12:36:57.190717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.223 [2024-12-10 12:36:57.190731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.223 [2024-12-10 12:36:57.190738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.223 [2024-12-10 12:36:57.190744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.223 [2024-12-10 12:36:57.190759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.223 qpair failed and we were unable to recover it.
00:28:35.223 [2024-12-10 12:36:57.200638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.223 [2024-12-10 12:36:57.200696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.223 [2024-12-10 12:36:57.200709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.223 [2024-12-10 12:36:57.200716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.223 [2024-12-10 12:36:57.200723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.223 [2024-12-10 12:36:57.200738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.223 qpair failed and we were unable to recover it.
00:28:35.223 [2024-12-10 12:36:57.210693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.223 [2024-12-10 12:36:57.210754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.223 [2024-12-10 12:36:57.210768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.223 [2024-12-10 12:36:57.210775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.223 [2024-12-10 12:36:57.210783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.223 [2024-12-10 12:36:57.210797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.223 qpair failed and we were unable to recover it.
00:28:35.223 [2024-12-10 12:36:57.220695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.223 [2024-12-10 12:36:57.220753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.223 [2024-12-10 12:36:57.220767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.223 [2024-12-10 12:36:57.220775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.223 [2024-12-10 12:36:57.220782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.223 [2024-12-10 12:36:57.220797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.223 qpair failed and we were unable to recover it.
00:28:35.223 [2024-12-10 12:36:57.230717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.223 [2024-12-10 12:36:57.230774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.223 [2024-12-10 12:36:57.230788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.223 [2024-12-10 12:36:57.230795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.223 [2024-12-10 12:36:57.230802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.223 [2024-12-10 12:36:57.230817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.223 qpair failed and we were unable to recover it.
00:28:35.223 [2024-12-10 12:36:57.240725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.223 [2024-12-10 12:36:57.240789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.223 [2024-12-10 12:36:57.240803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.223 [2024-12-10 12:36:57.240811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.223 [2024-12-10 12:36:57.240817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.223 [2024-12-10 12:36:57.240832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.223 qpair failed and we were unable to recover it.
00:28:35.223 [2024-12-10 12:36:57.250773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.223 [2024-12-10 12:36:57.250825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.223 [2024-12-10 12:36:57.250838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.223 [2024-12-10 12:36:57.250846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.223 [2024-12-10 12:36:57.250852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.223 [2024-12-10 12:36:57.250868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.223 qpair failed and we were unable to recover it.
00:28:35.223 [2024-12-10 12:36:57.260796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.223 [2024-12-10 12:36:57.260854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.223 [2024-12-10 12:36:57.260867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.223 [2024-12-10 12:36:57.260875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.223 [2024-12-10 12:36:57.260881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.223 [2024-12-10 12:36:57.260896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.223 qpair failed and we were unable to recover it.
00:28:35.223 [2024-12-10 12:36:57.270770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.223 [2024-12-10 12:36:57.270828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.223 [2024-12-10 12:36:57.270847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.223 [2024-12-10 12:36:57.270855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.223 [2024-12-10 12:36:57.270861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.223 [2024-12-10 12:36:57.270876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.223 qpair failed and we were unable to recover it.
00:28:35.223 [2024-12-10 12:36:57.280863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.223 [2024-12-10 12:36:57.280918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.223 [2024-12-10 12:36:57.280931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.223 [2024-12-10 12:36:57.280938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.223 [2024-12-10 12:36:57.280945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.223 [2024-12-10 12:36:57.280960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.223 qpair failed and we were unable to recover it.
00:28:35.223 [2024-12-10 12:36:57.290895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.223 [2024-12-10 12:36:57.290946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.223 [2024-12-10 12:36:57.290959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.223 [2024-12-10 12:36:57.290967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.223 [2024-12-10 12:36:57.290973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.223 [2024-12-10 12:36:57.290989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.223 qpair failed and we were unable to recover it.
00:28:35.223 [2024-12-10 12:36:57.300914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.223 [2024-12-10 12:36:57.300969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.223 [2024-12-10 12:36:57.300982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.223 [2024-12-10 12:36:57.300989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.223 [2024-12-10 12:36:57.300997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.223 [2024-12-10 12:36:57.301013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.223 qpair failed and we were unable to recover it.
00:28:35.223 [2024-12-10 12:36:57.310953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.223 [2024-12-10 12:36:57.311009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.223 [2024-12-10 12:36:57.311022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.223 [2024-12-10 12:36:57.311029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.223 [2024-12-10 12:36:57.311039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.223 [2024-12-10 12:36:57.311054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.223 qpair failed and we were unable to recover it.
00:28:35.223 [2024-12-10 12:36:57.320967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.224 [2024-12-10 12:36:57.321019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.224 [2024-12-10 12:36:57.321034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.224 [2024-12-10 12:36:57.321041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.224 [2024-12-10 12:36:57.321048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.224 [2024-12-10 12:36:57.321063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.224 qpair failed and we were unable to recover it.
00:28:35.224 [2024-12-10 12:36:57.331014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.224 [2024-12-10 12:36:57.331066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.224 [2024-12-10 12:36:57.331081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.224 [2024-12-10 12:36:57.331088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.224 [2024-12-10 12:36:57.331095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.224 [2024-12-10 12:36:57.331111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.224 qpair failed and we were unable to recover it.
00:28:35.224 [2024-12-10 12:36:57.341034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.224 [2024-12-10 12:36:57.341101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.224 [2024-12-10 12:36:57.341115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.224 [2024-12-10 12:36:57.341123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.224 [2024-12-10 12:36:57.341129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.224 [2024-12-10 12:36:57.341145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.224 qpair failed and we were unable to recover it.
00:28:35.224 [2024-12-10 12:36:57.351106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.224 [2024-12-10 12:36:57.351166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.224 [2024-12-10 12:36:57.351180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.224 [2024-12-10 12:36:57.351187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.224 [2024-12-10 12:36:57.351193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.224 [2024-12-10 12:36:57.351208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.224 qpair failed and we were unable to recover it.
00:28:35.224 [2024-12-10 12:36:57.361092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.224 [2024-12-10 12:36:57.361145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.224 [2024-12-10 12:36:57.361164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.224 [2024-12-10 12:36:57.361172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.224 [2024-12-10 12:36:57.361178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.224 [2024-12-10 12:36:57.361193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.224 qpair failed and we were unable to recover it.
00:28:35.224 [2024-12-10 12:36:57.371170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.224 [2024-12-10 12:36:57.371223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.224 [2024-12-10 12:36:57.371237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.224 [2024-12-10 12:36:57.371244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.224 [2024-12-10 12:36:57.371251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.224 [2024-12-10 12:36:57.371266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.224 qpair failed and we were unable to recover it.
00:28:35.224 [2024-12-10 12:36:57.381171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.224 [2024-12-10 12:36:57.381244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.224 [2024-12-10 12:36:57.381258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.224 [2024-12-10 12:36:57.381265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.224 [2024-12-10 12:36:57.381271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.224 [2024-12-10 12:36:57.381286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.224 qpair failed and we were unable to recover it.
00:28:35.484 [2024-12-10 12:36:57.391233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.484 [2024-12-10 12:36:57.391294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.484 [2024-12-10 12:36:57.391313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.484 [2024-12-10 12:36:57.391321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.484 [2024-12-10 12:36:57.391327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.484 [2024-12-10 12:36:57.391345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.484 qpair failed and we were unable to recover it.
00:28:35.484 [2024-12-10 12:36:57.401248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.484 [2024-12-10 12:36:57.401317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.484 [2024-12-10 12:36:57.401337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.484 [2024-12-10 12:36:57.401346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.484 [2024-12-10 12:36:57.401353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.484 [2024-12-10 12:36:57.401369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.484 qpair failed and we were unable to recover it.
00:28:35.484 [2024-12-10 12:36:57.411275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.484 [2024-12-10 12:36:57.411330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.484 [2024-12-10 12:36:57.411344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.484 [2024-12-10 12:36:57.411353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.484 [2024-12-10 12:36:57.411360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.484 [2024-12-10 12:36:57.411375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.484 qpair failed and we were unable to recover it.
00:28:35.484 [2024-12-10 12:36:57.421274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.484 [2024-12-10 12:36:57.421332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.484 [2024-12-10 12:36:57.421347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.484 [2024-12-10 12:36:57.421355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.484 [2024-12-10 12:36:57.421361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.484 [2024-12-10 12:36:57.421377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.484 qpair failed and we were unable to recover it.
00:28:35.484 [2024-12-10 12:36:57.431290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.484 [2024-12-10 12:36:57.431387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.484 [2024-12-10 12:36:57.431402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.484 [2024-12-10 12:36:57.431409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.484 [2024-12-10 12:36:57.431415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.484 [2024-12-10 12:36:57.431430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.484 qpair failed and we were unable to recover it.
00:28:35.484 [2024-12-10 12:36:57.441285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.484 [2024-12-10 12:36:57.441342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.484 [2024-12-10 12:36:57.441355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.484 [2024-12-10 12:36:57.441362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.484 [2024-12-10 12:36:57.441372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.484 [2024-12-10 12:36:57.441387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.484 qpair failed and we were unable to recover it.
00:28:35.484 [2024-12-10 12:36:57.451341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.484 [2024-12-10 12:36:57.451432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.484 [2024-12-10 12:36:57.451446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.484 [2024-12-10 12:36:57.451454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.484 [2024-12-10 12:36:57.451460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.484 [2024-12-10 12:36:57.451475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.484 qpair failed and we were unable to recover it.
00:28:35.484 [2024-12-10 12:36:57.461384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.484 [2024-12-10 12:36:57.461442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.484 [2024-12-10 12:36:57.461457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.484 [2024-12-10 12:36:57.461464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.484 [2024-12-10 12:36:57.461471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.484 [2024-12-10 12:36:57.461486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.484 qpair failed and we were unable to recover it.
00:28:35.484 [2024-12-10 12:36:57.471404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.484 [2024-12-10 12:36:57.471461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.484 [2024-12-10 12:36:57.471475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.484 [2024-12-10 12:36:57.471483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.484 [2024-12-10 12:36:57.471490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.484 [2024-12-10 12:36:57.471506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.484 qpair failed and we were unable to recover it. 
00:28:35.484 [2024-12-10 12:36:57.481459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.484 [2024-12-10 12:36:57.481639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.484 [2024-12-10 12:36:57.481655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.484 [2024-12-10 12:36:57.481662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.484 [2024-12-10 12:36:57.481669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.484 [2024-12-10 12:36:57.481686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.484 qpair failed and we were unable to recover it. 
00:28:35.484 [2024-12-10 12:36:57.491383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.484 [2024-12-10 12:36:57.491455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.485 [2024-12-10 12:36:57.491469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.485 [2024-12-10 12:36:57.491477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.485 [2024-12-10 12:36:57.491483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.485 [2024-12-10 12:36:57.491500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.485 qpair failed and we were unable to recover it. 
00:28:35.485 [2024-12-10 12:36:57.501458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.485 [2024-12-10 12:36:57.501537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.485 [2024-12-10 12:36:57.501551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.485 [2024-12-10 12:36:57.501558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.485 [2024-12-10 12:36:57.501564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.485 [2024-12-10 12:36:57.501579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.485 qpair failed and we were unable to recover it. 
00:28:35.485 [2024-12-10 12:36:57.511513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.485 [2024-12-10 12:36:57.511565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.485 [2024-12-10 12:36:57.511578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.485 [2024-12-10 12:36:57.511586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.485 [2024-12-10 12:36:57.511593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.485 [2024-12-10 12:36:57.511608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.485 qpair failed and we were unable to recover it. 
00:28:35.485 [2024-12-10 12:36:57.521526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.485 [2024-12-10 12:36:57.521582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.485 [2024-12-10 12:36:57.521597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.485 [2024-12-10 12:36:57.521604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.485 [2024-12-10 12:36:57.521611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.485 [2024-12-10 12:36:57.521626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.485 qpair failed and we were unable to recover it. 
00:28:35.485 [2024-12-10 12:36:57.531557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.485 [2024-12-10 12:36:57.531632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.485 [2024-12-10 12:36:57.531650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.485 [2024-12-10 12:36:57.531657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.485 [2024-12-10 12:36:57.531664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.485 [2024-12-10 12:36:57.531679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.485 qpair failed and we were unable to recover it. 
00:28:35.485 [2024-12-10 12:36:57.541602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.485 [2024-12-10 12:36:57.541662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.485 [2024-12-10 12:36:57.541676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.485 [2024-12-10 12:36:57.541683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.485 [2024-12-10 12:36:57.541689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.485 [2024-12-10 12:36:57.541704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.485 qpair failed and we were unable to recover it. 
00:28:35.485 [2024-12-10 12:36:57.551629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.485 [2024-12-10 12:36:57.551688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.485 [2024-12-10 12:36:57.551711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.485 [2024-12-10 12:36:57.551719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.485 [2024-12-10 12:36:57.551725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.485 [2024-12-10 12:36:57.551746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.485 qpair failed and we were unable to recover it. 
00:28:35.485 [2024-12-10 12:36:57.561665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.485 [2024-12-10 12:36:57.561720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.485 [2024-12-10 12:36:57.561734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.485 [2024-12-10 12:36:57.561741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.485 [2024-12-10 12:36:57.561748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.485 [2024-12-10 12:36:57.561763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.485 qpair failed and we were unable to recover it. 
00:28:35.485 [2024-12-10 12:36:57.571695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.485 [2024-12-10 12:36:57.571752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.485 [2024-12-10 12:36:57.571766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.485 [2024-12-10 12:36:57.571777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.485 [2024-12-10 12:36:57.571783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.485 [2024-12-10 12:36:57.571799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.485 qpair failed and we were unable to recover it. 
00:28:35.485 [2024-12-10 12:36:57.581730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.485 [2024-12-10 12:36:57.581789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.485 [2024-12-10 12:36:57.581802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.485 [2024-12-10 12:36:57.581809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.485 [2024-12-10 12:36:57.581816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.485 [2024-12-10 12:36:57.581831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.485 qpair failed and we were unable to recover it. 
00:28:35.485 [2024-12-10 12:36:57.591755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.485 [2024-12-10 12:36:57.591835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.485 [2024-12-10 12:36:57.591850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.485 [2024-12-10 12:36:57.591858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.485 [2024-12-10 12:36:57.591864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.485 [2024-12-10 12:36:57.591880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.485 qpair failed and we were unable to recover it. 
00:28:35.485 [2024-12-10 12:36:57.601823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.485 [2024-12-10 12:36:57.601880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.485 [2024-12-10 12:36:57.601894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.485 [2024-12-10 12:36:57.601902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.485 [2024-12-10 12:36:57.601908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.485 [2024-12-10 12:36:57.601923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.485 qpair failed and we were unable to recover it. 
00:28:35.485 [2024-12-10 12:36:57.611801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.485 [2024-12-10 12:36:57.611858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.485 [2024-12-10 12:36:57.611871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.485 [2024-12-10 12:36:57.611880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.485 [2024-12-10 12:36:57.611886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.485 [2024-12-10 12:36:57.611905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.485 qpair failed and we were unable to recover it. 
00:28:35.485 [2024-12-10 12:36:57.621846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.485 [2024-12-10 12:36:57.621902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.485 [2024-12-10 12:36:57.621916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.485 [2024-12-10 12:36:57.621924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.486 [2024-12-10 12:36:57.621931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.486 [2024-12-10 12:36:57.621947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.486 qpair failed and we were unable to recover it. 
00:28:35.486 [2024-12-10 12:36:57.631909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.486 [2024-12-10 12:36:57.632012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.486 [2024-12-10 12:36:57.632026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.486 [2024-12-10 12:36:57.632033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.486 [2024-12-10 12:36:57.632040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.486 [2024-12-10 12:36:57.632055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.486 qpair failed and we were unable to recover it. 
00:28:35.486 [2024-12-10 12:36:57.641933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.486 [2024-12-10 12:36:57.642004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.486 [2024-12-10 12:36:57.642019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.486 [2024-12-10 12:36:57.642026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.486 [2024-12-10 12:36:57.642032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.486 [2024-12-10 12:36:57.642047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.486 qpair failed and we were unable to recover it. 
00:28:35.745 [2024-12-10 12:36:57.651930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.745 [2024-12-10 12:36:57.651986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.745 [2024-12-10 12:36:57.652003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.745 [2024-12-10 12:36:57.652012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.745 [2024-12-10 12:36:57.652019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.745 [2024-12-10 12:36:57.652035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.745 qpair failed and we were unable to recover it. 
00:28:35.745 [2024-12-10 12:36:57.661958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.745 [2024-12-10 12:36:57.662034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.745 [2024-12-10 12:36:57.662051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.745 [2024-12-10 12:36:57.662059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.745 [2024-12-10 12:36:57.662065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.745 [2024-12-10 12:36:57.662082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.745 qpair failed and we were unable to recover it. 
00:28:35.746 [2024-12-10 12:36:57.671961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.746 [2024-12-10 12:36:57.672022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.746 [2024-12-10 12:36:57.672037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.746 [2024-12-10 12:36:57.672045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.746 [2024-12-10 12:36:57.672051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.746 [2024-12-10 12:36:57.672067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.746 qpair failed and we were unable to recover it. 
00:28:35.746 [2024-12-10 12:36:57.682013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.746 [2024-12-10 12:36:57.682067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.746 [2024-12-10 12:36:57.682081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.746 [2024-12-10 12:36:57.682089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.746 [2024-12-10 12:36:57.682095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.746 [2024-12-10 12:36:57.682110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.746 qpair failed and we were unable to recover it. 
00:28:35.746 [2024-12-10 12:36:57.692077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.746 [2024-12-10 12:36:57.692177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.746 [2024-12-10 12:36:57.692192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.746 [2024-12-10 12:36:57.692199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.746 [2024-12-10 12:36:57.692205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.746 [2024-12-10 12:36:57.692221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.746 qpair failed and we were unable to recover it. 
00:28:35.746 [2024-12-10 12:36:57.702007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.746 [2024-12-10 12:36:57.702064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.746 [2024-12-10 12:36:57.702079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.746 [2024-12-10 12:36:57.702089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.746 [2024-12-10 12:36:57.702096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.746 [2024-12-10 12:36:57.702111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.746 qpair failed and we were unable to recover it. 
00:28:35.746 [2024-12-10 12:36:57.712139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.746 [2024-12-10 12:36:57.712192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.746 [2024-12-10 12:36:57.712206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.746 [2024-12-10 12:36:57.712214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.746 [2024-12-10 12:36:57.712220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.746 [2024-12-10 12:36:57.712235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.746 qpair failed and we were unable to recover it. 
00:28:35.746 [2024-12-10 12:36:57.722048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.746 [2024-12-10 12:36:57.722109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.746 [2024-12-10 12:36:57.722124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.746 [2024-12-10 12:36:57.722132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.746 [2024-12-10 12:36:57.722139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.746 [2024-12-10 12:36:57.722154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.746 qpair failed and we were unable to recover it. 
00:28:35.746 [2024-12-10 12:36:57.732143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:35.746 [2024-12-10 12:36:57.732201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:35.746 [2024-12-10 12:36:57.732215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:35.746 [2024-12-10 12:36:57.732223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:35.746 [2024-12-10 12:36:57.732229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:35.746 [2024-12-10 12:36:57.732245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:35.746 qpair failed and we were unable to recover it. 
00:28:35.746 [2024-12-10 12:36:57.742099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.746 [2024-12-10 12:36:57.742153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.746 [2024-12-10 12:36:57.742171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.746 [2024-12-10 12:36:57.742178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.746 [2024-12-10 12:36:57.742184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.746 [2024-12-10 12:36:57.742202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.746 qpair failed and we were unable to recover it.
00:28:35.746 [2024-12-10 12:36:57.752208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.746 [2024-12-10 12:36:57.752268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.746 [2024-12-10 12:36:57.752281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.746 [2024-12-10 12:36:57.752289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.746 [2024-12-10 12:36:57.752295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.746 [2024-12-10 12:36:57.752310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.746 qpair failed and we were unable to recover it.
00:28:35.746 [2024-12-10 12:36:57.762193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.746 [2024-12-10 12:36:57.762285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.746 [2024-12-10 12:36:57.762298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.746 [2024-12-10 12:36:57.762305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.746 [2024-12-10 12:36:57.762312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.746 [2024-12-10 12:36:57.762327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.746 qpair failed and we were unable to recover it.
00:28:35.746 [2024-12-10 12:36:57.772190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.746 [2024-12-10 12:36:57.772245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.746 [2024-12-10 12:36:57.772259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.746 [2024-12-10 12:36:57.772266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.746 [2024-12-10 12:36:57.772272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.746 [2024-12-10 12:36:57.772287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.746 qpair failed and we were unable to recover it.
00:28:35.746 [2024-12-10 12:36:57.782317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.746 [2024-12-10 12:36:57.782424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.746 [2024-12-10 12:36:57.782438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.746 [2024-12-10 12:36:57.782445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.746 [2024-12-10 12:36:57.782451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.746 [2024-12-10 12:36:57.782466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.746 qpair failed and we were unable to recover it.
00:28:35.746 [2024-12-10 12:36:57.792323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.746 [2024-12-10 12:36:57.792378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.746 [2024-12-10 12:36:57.792391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.746 [2024-12-10 12:36:57.792398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.746 [2024-12-10 12:36:57.792405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.746 [2024-12-10 12:36:57.792420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.746 qpair failed and we were unable to recover it.
00:28:35.746 [2024-12-10 12:36:57.802339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.747 [2024-12-10 12:36:57.802388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.747 [2024-12-10 12:36:57.802401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.747 [2024-12-10 12:36:57.802408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.747 [2024-12-10 12:36:57.802414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.747 [2024-12-10 12:36:57.802430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.747 qpair failed and we were unable to recover it.
00:28:35.747 [2024-12-10 12:36:57.812375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.747 [2024-12-10 12:36:57.812432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.747 [2024-12-10 12:36:57.812446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.747 [2024-12-10 12:36:57.812453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.747 [2024-12-10 12:36:57.812459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.747 [2024-12-10 12:36:57.812475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.747 qpair failed and we were unable to recover it.
00:28:35.747 [2024-12-10 12:36:57.822413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.747 [2024-12-10 12:36:57.822482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.747 [2024-12-10 12:36:57.822496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.747 [2024-12-10 12:36:57.822504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.747 [2024-12-10 12:36:57.822510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.747 [2024-12-10 12:36:57.822525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.747 qpair failed and we were unable to recover it.
00:28:35.747 [2024-12-10 12:36:57.832448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.747 [2024-12-10 12:36:57.832505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.747 [2024-12-10 12:36:57.832522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.747 [2024-12-10 12:36:57.832530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.747 [2024-12-10 12:36:57.832536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.747 [2024-12-10 12:36:57.832552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.747 qpair failed and we were unable to recover it.
00:28:35.747 [2024-12-10 12:36:57.842493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.747 [2024-12-10 12:36:57.842559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.747 [2024-12-10 12:36:57.842573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.747 [2024-12-10 12:36:57.842580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.747 [2024-12-10 12:36:57.842586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.747 [2024-12-10 12:36:57.842601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.747 qpair failed and we were unable to recover it.
00:28:35.747 [2024-12-10 12:36:57.852492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.747 [2024-12-10 12:36:57.852562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.747 [2024-12-10 12:36:57.852576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.747 [2024-12-10 12:36:57.852583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.747 [2024-12-10 12:36:57.852589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.747 [2024-12-10 12:36:57.852605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.747 qpair failed and we were unable to recover it.
00:28:35.747 [2024-12-10 12:36:57.862520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.747 [2024-12-10 12:36:57.862574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.747 [2024-12-10 12:36:57.862588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.747 [2024-12-10 12:36:57.862594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.747 [2024-12-10 12:36:57.862601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.747 [2024-12-10 12:36:57.862617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.747 qpair failed and we were unable to recover it.
00:28:35.747 [2024-12-10 12:36:57.872467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.747 [2024-12-10 12:36:57.872531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.747 [2024-12-10 12:36:57.872545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.747 [2024-12-10 12:36:57.872552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.747 [2024-12-10 12:36:57.872561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.747 [2024-12-10 12:36:57.872577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.747 qpair failed and we were unable to recover it.
00:28:35.747 [2024-12-10 12:36:57.882560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.747 [2024-12-10 12:36:57.882615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.747 [2024-12-10 12:36:57.882629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.747 [2024-12-10 12:36:57.882636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.747 [2024-12-10 12:36:57.882643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.747 [2024-12-10 12:36:57.882658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.747 qpair failed and we were unable to recover it.
00:28:35.747 [2024-12-10 12:36:57.892639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.747 [2024-12-10 12:36:57.892710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.747 [2024-12-10 12:36:57.892724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.747 [2024-12-10 12:36:57.892731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.747 [2024-12-10 12:36:57.892737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.747 [2024-12-10 12:36:57.892753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.747 qpair failed and we were unable to recover it.
00:28:35.747 [2024-12-10 12:36:57.902626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:35.747 [2024-12-10 12:36:57.902693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:35.747 [2024-12-10 12:36:57.902707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:35.747 [2024-12-10 12:36:57.902714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:35.747 [2024-12-10 12:36:57.902721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:35.747 [2024-12-10 12:36:57.902735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:35.747 qpair failed and we were unable to recover it.
00:28:36.007 [2024-12-10 12:36:57.912696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.007 [2024-12-10 12:36:57.912756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.007 [2024-12-10 12:36:57.912773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.007 [2024-12-10 12:36:57.912781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.007 [2024-12-10 12:36:57.912788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.007 [2024-12-10 12:36:57.912806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.007 qpair failed and we were unable to recover it.
00:28:36.007 [2024-12-10 12:36:57.922666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.007 [2024-12-10 12:36:57.922716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.007 [2024-12-10 12:36:57.922733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.007 [2024-12-10 12:36:57.922741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.007 [2024-12-10 12:36:57.922748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.007 [2024-12-10 12:36:57.922765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.007 qpair failed and we were unable to recover it.
00:28:36.007 [2024-12-10 12:36:57.932700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.007 [2024-12-10 12:36:57.932755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.007 [2024-12-10 12:36:57.932770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.007 [2024-12-10 12:36:57.932777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.007 [2024-12-10 12:36:57.932783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.007 [2024-12-10 12:36:57.932799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.007 qpair failed and we were unable to recover it.
00:28:36.007 [2024-12-10 12:36:57.942735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.007 [2024-12-10 12:36:57.942792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.007 [2024-12-10 12:36:57.942807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.007 [2024-12-10 12:36:57.942814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.007 [2024-12-10 12:36:57.942820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.007 [2024-12-10 12:36:57.942836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.007 qpair failed and we were unable to recover it.
00:28:36.007 [2024-12-10 12:36:57.952781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.007 [2024-12-10 12:36:57.952842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.007 [2024-12-10 12:36:57.952856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.007 [2024-12-10 12:36:57.952864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.007 [2024-12-10 12:36:57.952870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.007 [2024-12-10 12:36:57.952885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.007 qpair failed and we were unable to recover it.
00:28:36.007 [2024-12-10 12:36:57.962789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.007 [2024-12-10 12:36:57.962853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.007 [2024-12-10 12:36:57.962873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.007 [2024-12-10 12:36:57.962880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.007 [2024-12-10 12:36:57.962887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.007 [2024-12-10 12:36:57.962903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.007 qpair failed and we were unable to recover it.
00:28:36.007 [2024-12-10 12:36:57.972743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.007 [2024-12-10 12:36:57.972792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.007 [2024-12-10 12:36:57.972805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.007 [2024-12-10 12:36:57.972813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.007 [2024-12-10 12:36:57.972819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.007 [2024-12-10 12:36:57.972834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.007 qpair failed and we were unable to recover it.
00:28:36.007 [2024-12-10 12:36:57.982851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.007 [2024-12-10 12:36:57.982909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.007 [2024-12-10 12:36:57.982923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.007 [2024-12-10 12:36:57.982930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.007 [2024-12-10 12:36:57.982936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.007 [2024-12-10 12:36:57.982952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.007 qpair failed and we were unable to recover it.
00:28:36.007 [2024-12-10 12:36:57.992800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.008 [2024-12-10 12:36:57.992853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.008 [2024-12-10 12:36:57.992867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.008 [2024-12-10 12:36:57.992875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.008 [2024-12-10 12:36:57.992881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.008 [2024-12-10 12:36:57.992897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.008 qpair failed and we were unable to recover it.
00:28:36.008 [2024-12-10 12:36:58.002880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.008 [2024-12-10 12:36:58.002937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.008 [2024-12-10 12:36:58.002950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.008 [2024-12-10 12:36:58.002957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.008 [2024-12-10 12:36:58.002966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.008 [2024-12-10 12:36:58.002981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.008 qpair failed and we were unable to recover it.
00:28:36.008 [2024-12-10 12:36:58.012856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.008 [2024-12-10 12:36:58.012935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.008 [2024-12-10 12:36:58.012950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.008 [2024-12-10 12:36:58.012957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.008 [2024-12-10 12:36:58.012964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.008 [2024-12-10 12:36:58.012979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.008 qpair failed and we were unable to recover it.
00:28:36.008 [2024-12-10 12:36:58.022949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.008 [2024-12-10 12:36:58.023003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.008 [2024-12-10 12:36:58.023018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.008 [2024-12-10 12:36:58.023026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.008 [2024-12-10 12:36:58.023032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.008 [2024-12-10 12:36:58.023048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.008 qpair failed and we were unable to recover it.
00:28:36.008 [2024-12-10 12:36:58.032941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.008 [2024-12-10 12:36:58.032996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.008 [2024-12-10 12:36:58.033011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.008 [2024-12-10 12:36:58.033018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.008 [2024-12-10 12:36:58.033025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.008 [2024-12-10 12:36:58.033040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.008 qpair failed and we were unable to recover it.
00:28:36.008 [2024-12-10 12:36:58.042998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.008 [2024-12-10 12:36:58.043058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.008 [2024-12-10 12:36:58.043072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.008 [2024-12-10 12:36:58.043079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.008 [2024-12-10 12:36:58.043086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.008 [2024-12-10 12:36:58.043101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.008 qpair failed and we were unable to recover it.
00:28:36.008 [2024-12-10 12:36:58.053031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.008 [2024-12-10 12:36:58.053086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.008 [2024-12-10 12:36:58.053100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.008 [2024-12-10 12:36:58.053107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.008 [2024-12-10 12:36:58.053114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.008 [2024-12-10 12:36:58.053130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.008 qpair failed and we were unable to recover it.
00:28:36.008 [2024-12-10 12:36:58.063013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.008 [2024-12-10 12:36:58.063070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.008 [2024-12-10 12:36:58.063084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.008 [2024-12-10 12:36:58.063091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.008 [2024-12-10 12:36:58.063098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.008 [2024-12-10 12:36:58.063113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.008 qpair failed and we were unable to recover it.
00:28:36.008 [2024-12-10 12:36:58.073118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.008 [2024-12-10 12:36:58.073191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.008 [2024-12-10 12:36:58.073206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.008 [2024-12-10 12:36:58.073213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.008 [2024-12-10 12:36:58.073219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.008 [2024-12-10 12:36:58.073234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.008 qpair failed and we were unable to recover it.
00:28:36.008 [2024-12-10 12:36:58.083121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.008 [2024-12-10 12:36:58.083174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.008 [2024-12-10 12:36:58.083188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.008 [2024-12-10 12:36:58.083195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.008 [2024-12-10 12:36:58.083201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.008 [2024-12-10 12:36:58.083216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.008 qpair failed and we were unable to recover it.
00:28:36.008 [2024-12-10 12:36:58.093084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.008 [2024-12-10 12:36:58.093162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.008 [2024-12-10 12:36:58.093179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.008 [2024-12-10 12:36:58.093186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.008 [2024-12-10 12:36:58.093193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.008 [2024-12-10 12:36:58.093208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.008 qpair failed and we were unable to recover it. 
00:28:36.008 [2024-12-10 12:36:58.103115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.008 [2024-12-10 12:36:58.103189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.008 [2024-12-10 12:36:58.103204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.008 [2024-12-10 12:36:58.103211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.008 [2024-12-10 12:36:58.103217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.008 [2024-12-10 12:36:58.103232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.008 qpair failed and we were unable to recover it. 
00:28:36.008 [2024-12-10 12:36:58.113211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.008 [2024-12-10 12:36:58.113284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.008 [2024-12-10 12:36:58.113298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.008 [2024-12-10 12:36:58.113306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.008 [2024-12-10 12:36:58.113313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.008 [2024-12-10 12:36:58.113328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.008 qpair failed and we were unable to recover it. 
00:28:36.008 [2024-12-10 12:36:58.123183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.008 [2024-12-10 12:36:58.123244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.009 [2024-12-10 12:36:58.123259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.009 [2024-12-10 12:36:58.123267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.009 [2024-12-10 12:36:58.123273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.009 [2024-12-10 12:36:58.123289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.009 qpair failed and we were unable to recover it. 
00:28:36.009 [2024-12-10 12:36:58.133195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.009 [2024-12-10 12:36:58.133252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.009 [2024-12-10 12:36:58.133267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.009 [2024-12-10 12:36:58.133277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.009 [2024-12-10 12:36:58.133284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.009 [2024-12-10 12:36:58.133299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.009 qpair failed and we were unable to recover it. 
00:28:36.009 [2024-12-10 12:36:58.143222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.009 [2024-12-10 12:36:58.143279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.009 [2024-12-10 12:36:58.143293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.009 [2024-12-10 12:36:58.143299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.009 [2024-12-10 12:36:58.143306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.009 [2024-12-10 12:36:58.143321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.009 qpair failed and we were unable to recover it. 
00:28:36.009 [2024-12-10 12:36:58.153343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.009 [2024-12-10 12:36:58.153400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.009 [2024-12-10 12:36:58.153413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.009 [2024-12-10 12:36:58.153420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.009 [2024-12-10 12:36:58.153427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.009 [2024-12-10 12:36:58.153443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.009 qpair failed and we were unable to recover it. 
00:28:36.009 [2024-12-10 12:36:58.163302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.009 [2024-12-10 12:36:58.163399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.009 [2024-12-10 12:36:58.163412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.009 [2024-12-10 12:36:58.163419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.009 [2024-12-10 12:36:58.163426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.009 [2024-12-10 12:36:58.163441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.009 qpair failed and we were unable to recover it. 
00:28:36.268 [2024-12-10 12:36:58.173409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.269 [2024-12-10 12:36:58.173459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.269 [2024-12-10 12:36:58.173477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.269 [2024-12-10 12:36:58.173485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.269 [2024-12-10 12:36:58.173491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.269 [2024-12-10 12:36:58.173513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.269 qpair failed and we were unable to recover it. 
00:28:36.269 [2024-12-10 12:36:58.183432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.269 [2024-12-10 12:36:58.183491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.269 [2024-12-10 12:36:58.183507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.269 [2024-12-10 12:36:58.183515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.269 [2024-12-10 12:36:58.183521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.269 [2024-12-10 12:36:58.183538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.269 qpair failed and we were unable to recover it. 
00:28:36.269 [2024-12-10 12:36:58.193395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.269 [2024-12-10 12:36:58.193457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.269 [2024-12-10 12:36:58.193471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.269 [2024-12-10 12:36:58.193478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.269 [2024-12-10 12:36:58.193485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.269 [2024-12-10 12:36:58.193501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.269 qpair failed and we were unable to recover it. 
00:28:36.269 [2024-12-10 12:36:58.203453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.269 [2024-12-10 12:36:58.203509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.269 [2024-12-10 12:36:58.203523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.269 [2024-12-10 12:36:58.203530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.269 [2024-12-10 12:36:58.203537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.269 [2024-12-10 12:36:58.203552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.269 qpair failed and we were unable to recover it. 
00:28:36.269 [2024-12-10 12:36:58.213505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.269 [2024-12-10 12:36:58.213559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.269 [2024-12-10 12:36:58.213573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.269 [2024-12-10 12:36:58.213581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.269 [2024-12-10 12:36:58.213587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.269 [2024-12-10 12:36:58.213602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.269 qpair failed and we were unable to recover it. 
00:28:36.269 [2024-12-10 12:36:58.223517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.269 [2024-12-10 12:36:58.223580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.269 [2024-12-10 12:36:58.223595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.269 [2024-12-10 12:36:58.223603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.269 [2024-12-10 12:36:58.223609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.269 [2024-12-10 12:36:58.223625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.269 qpair failed and we were unable to recover it. 
00:28:36.269 [2024-12-10 12:36:58.233602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.269 [2024-12-10 12:36:58.233658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.269 [2024-12-10 12:36:58.233672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.269 [2024-12-10 12:36:58.233679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.269 [2024-12-10 12:36:58.233685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.269 [2024-12-10 12:36:58.233701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.269 qpair failed and we were unable to recover it. 
00:28:36.269 [2024-12-10 12:36:58.243540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.269 [2024-12-10 12:36:58.243589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.269 [2024-12-10 12:36:58.243604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.269 [2024-12-10 12:36:58.243611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.269 [2024-12-10 12:36:58.243617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.269 [2024-12-10 12:36:58.243632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.269 qpair failed and we were unable to recover it. 
00:28:36.269 [2024-12-10 12:36:58.253557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.269 [2024-12-10 12:36:58.253610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.269 [2024-12-10 12:36:58.253623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.269 [2024-12-10 12:36:58.253630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.269 [2024-12-10 12:36:58.253637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.269 [2024-12-10 12:36:58.253653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.269 qpair failed and we were unable to recover it. 
00:28:36.269 [2024-12-10 12:36:58.263655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.269 [2024-12-10 12:36:58.263712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.269 [2024-12-10 12:36:58.263726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.269 [2024-12-10 12:36:58.263736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.269 [2024-12-10 12:36:58.263743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.269 [2024-12-10 12:36:58.263758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.269 qpair failed and we were unable to recover it. 
00:28:36.269 [2024-12-10 12:36:58.273625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.269 [2024-12-10 12:36:58.273682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.269 [2024-12-10 12:36:58.273696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.269 [2024-12-10 12:36:58.273703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.269 [2024-12-10 12:36:58.273709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.269 [2024-12-10 12:36:58.273725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.269 qpair failed and we were unable to recover it. 
00:28:36.269 [2024-12-10 12:36:58.283689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.269 [2024-12-10 12:36:58.283747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.269 [2024-12-10 12:36:58.283761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.269 [2024-12-10 12:36:58.283768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.269 [2024-12-10 12:36:58.283775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.269 [2024-12-10 12:36:58.283790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.269 qpair failed and we were unable to recover it. 
00:28:36.269 [2024-12-10 12:36:58.293761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.269 [2024-12-10 12:36:58.293864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.269 [2024-12-10 12:36:58.293878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.269 [2024-12-10 12:36:58.293885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.269 [2024-12-10 12:36:58.293891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.269 [2024-12-10 12:36:58.293908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.269 qpair failed and we were unable to recover it. 
00:28:36.269 [2024-12-10 12:36:58.303730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.270 [2024-12-10 12:36:58.303799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.270 [2024-12-10 12:36:58.303813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.270 [2024-12-10 12:36:58.303820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.270 [2024-12-10 12:36:58.303826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.270 [2024-12-10 12:36:58.303844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.270 qpair failed and we were unable to recover it. 
00:28:36.270 [2024-12-10 12:36:58.313740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.270 [2024-12-10 12:36:58.313793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.270 [2024-12-10 12:36:58.313807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.270 [2024-12-10 12:36:58.313815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.270 [2024-12-10 12:36:58.313821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.270 [2024-12-10 12:36:58.313836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.270 qpair failed and we were unable to recover it. 
00:28:36.270 [2024-12-10 12:36:58.323763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.270 [2024-12-10 12:36:58.323818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.270 [2024-12-10 12:36:58.323832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.270 [2024-12-10 12:36:58.323840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.270 [2024-12-10 12:36:58.323846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.270 [2024-12-10 12:36:58.323861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.270 qpair failed and we were unable to recover it. 
00:28:36.270 [2024-12-10 12:36:58.333871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.270 [2024-12-10 12:36:58.333953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.270 [2024-12-10 12:36:58.333967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.270 [2024-12-10 12:36:58.333974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.270 [2024-12-10 12:36:58.333980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.270 [2024-12-10 12:36:58.333996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.270 qpair failed and we were unable to recover it. 
00:28:36.270 [2024-12-10 12:36:58.343923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.270 [2024-12-10 12:36:58.343981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.270 [2024-12-10 12:36:58.343995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.270 [2024-12-10 12:36:58.344002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.270 [2024-12-10 12:36:58.344009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.270 [2024-12-10 12:36:58.344024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.270 qpair failed and we were unable to recover it. 
00:28:36.270 [2024-12-10 12:36:58.353860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.270 [2024-12-10 12:36:58.353913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.270 [2024-12-10 12:36:58.353927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.270 [2024-12-10 12:36:58.353934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.270 [2024-12-10 12:36:58.353941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.270 [2024-12-10 12:36:58.353956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.270 qpair failed and we were unable to recover it. 
00:28:36.270 [2024-12-10 12:36:58.363958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.270 [2024-12-10 12:36:58.364011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.270 [2024-12-10 12:36:58.364025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.270 [2024-12-10 12:36:58.364032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.270 [2024-12-10 12:36:58.364039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.270 [2024-12-10 12:36:58.364054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.270 qpair failed and we were unable to recover it.
00:28:36.270 [2024-12-10 12:36:58.373980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.270 [2024-12-10 12:36:58.374040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.270 [2024-12-10 12:36:58.374054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.270 [2024-12-10 12:36:58.374061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.270 [2024-12-10 12:36:58.374067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.270 [2024-12-10 12:36:58.374083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.270 qpair failed and we were unable to recover it.
00:28:36.270 [2024-12-10 12:36:58.384035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.270 [2024-12-10 12:36:58.384091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.270 [2024-12-10 12:36:58.384106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.270 [2024-12-10 12:36:58.384113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.270 [2024-12-10 12:36:58.384119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.270 [2024-12-10 12:36:58.384135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.270 qpair failed and we were unable to recover it.
00:28:36.270 [2024-12-10 12:36:58.394037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.270 [2024-12-10 12:36:58.394097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.270 [2024-12-10 12:36:58.394114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.270 [2024-12-10 12:36:58.394122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.270 [2024-12-10 12:36:58.394128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.270 [2024-12-10 12:36:58.394143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.270 qpair failed and we were unable to recover it.
00:28:36.270 [2024-12-10 12:36:58.404101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.270 [2024-12-10 12:36:58.404161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.270 [2024-12-10 12:36:58.404176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.270 [2024-12-10 12:36:58.404183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.270 [2024-12-10 12:36:58.404190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.270 [2024-12-10 12:36:58.404205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.270 qpair failed and we were unable to recover it.
00:28:36.270 [2024-12-10 12:36:58.414105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.270 [2024-12-10 12:36:58.414165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.270 [2024-12-10 12:36:58.414180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.270 [2024-12-10 12:36:58.414187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.270 [2024-12-10 12:36:58.414194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.270 [2024-12-10 12:36:58.414209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.270 qpair failed and we were unable to recover it.
00:28:36.270 [2024-12-10 12:36:58.424149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.270 [2024-12-10 12:36:58.424228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.270 [2024-12-10 12:36:58.424242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.270 [2024-12-10 12:36:58.424250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.270 [2024-12-10 12:36:58.424256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.270 [2024-12-10 12:36:58.424272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.270 qpair failed and we were unable to recover it.
00:28:36.530 [2024-12-10 12:36:58.434185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.530 [2024-12-10 12:36:58.434252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.530 [2024-12-10 12:36:58.434270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.530 [2024-12-10 12:36:58.434278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.530 [2024-12-10 12:36:58.434287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.530 [2024-12-10 12:36:58.434304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.530 qpair failed and we were unable to recover it.
00:28:36.530 [2024-12-10 12:36:58.444182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.530 [2024-12-10 12:36:58.444246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.530 [2024-12-10 12:36:58.444264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.530 [2024-12-10 12:36:58.444272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.530 [2024-12-10 12:36:58.444278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.530 [2024-12-10 12:36:58.444295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.530 qpair failed and we were unable to recover it.
00:28:36.530 [2024-12-10 12:36:58.454206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.530 [2024-12-10 12:36:58.454261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.530 [2024-12-10 12:36:58.454276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.530 [2024-12-10 12:36:58.454283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.530 [2024-12-10 12:36:58.454290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.530 [2024-12-10 12:36:58.454306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.530 qpair failed and we were unable to recover it.
00:28:36.530 [2024-12-10 12:36:58.464262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.530 [2024-12-10 12:36:58.464321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.530 [2024-12-10 12:36:58.464334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.530 [2024-12-10 12:36:58.464342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.530 [2024-12-10 12:36:58.464349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.530 [2024-12-10 12:36:58.464364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.530 qpair failed and we were unable to recover it.
00:28:36.530 [2024-12-10 12:36:58.474262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.530 [2024-12-10 12:36:58.474348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.530 [2024-12-10 12:36:58.474363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.530 [2024-12-10 12:36:58.474370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.530 [2024-12-10 12:36:58.474376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.530 [2024-12-10 12:36:58.474391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.530 qpair failed and we were unable to recover it.
00:28:36.530 [2024-12-10 12:36:58.484238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.530 [2024-12-10 12:36:58.484296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.530 [2024-12-10 12:36:58.484311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.530 [2024-12-10 12:36:58.484318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.530 [2024-12-10 12:36:58.484325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.530 [2024-12-10 12:36:58.484342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.530 qpair failed and we were unable to recover it.
00:28:36.530 [2024-12-10 12:36:58.494317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.530 [2024-12-10 12:36:58.494374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.530 [2024-12-10 12:36:58.494388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.530 [2024-12-10 12:36:58.494396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.530 [2024-12-10 12:36:58.494402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.530 [2024-12-10 12:36:58.494418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.530 qpair failed and we were unable to recover it.
00:28:36.530 [2024-12-10 12:36:58.504366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.530 [2024-12-10 12:36:58.504426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.530 [2024-12-10 12:36:58.504440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.530 [2024-12-10 12:36:58.504448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.530 [2024-12-10 12:36:58.504454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.531 [2024-12-10 12:36:58.504470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.531 qpair failed and we were unable to recover it.
00:28:36.531 [2024-12-10 12:36:58.514409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.531 [2024-12-10 12:36:58.514470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.531 [2024-12-10 12:36:58.514483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.531 [2024-12-10 12:36:58.514491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.531 [2024-12-10 12:36:58.514497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.531 [2024-12-10 12:36:58.514513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.531 qpair failed and we were unable to recover it.
00:28:36.531 [2024-12-10 12:36:58.524421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.531 [2024-12-10 12:36:58.524476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.531 [2024-12-10 12:36:58.524494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.531 [2024-12-10 12:36:58.524501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.531 [2024-12-10 12:36:58.524508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.531 [2024-12-10 12:36:58.524524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.531 qpair failed and we were unable to recover it.
00:28:36.531 [2024-12-10 12:36:58.534424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.531 [2024-12-10 12:36:58.534492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.531 [2024-12-10 12:36:58.534507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.531 [2024-12-10 12:36:58.534515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.531 [2024-12-10 12:36:58.534521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.531 [2024-12-10 12:36:58.534537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.531 qpair failed and we were unable to recover it.
00:28:36.531 [2024-12-10 12:36:58.544480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.531 [2024-12-10 12:36:58.544538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.531 [2024-12-10 12:36:58.544552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.531 [2024-12-10 12:36:58.544559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.531 [2024-12-10 12:36:58.544566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.531 [2024-12-10 12:36:58.544581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.531 qpair failed and we were unable to recover it.
00:28:36.531 [2024-12-10 12:36:58.554539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.531 [2024-12-10 12:36:58.554590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.531 [2024-12-10 12:36:58.554604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.531 [2024-12-10 12:36:58.554612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.531 [2024-12-10 12:36:58.554618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.531 [2024-12-10 12:36:58.554634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.531 qpair failed and we were unable to recover it.
00:28:36.531 [2024-12-10 12:36:58.564513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.531 [2024-12-10 12:36:58.564570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.531 [2024-12-10 12:36:58.564584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.531 [2024-12-10 12:36:58.564590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.531 [2024-12-10 12:36:58.564600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.531 [2024-12-10 12:36:58.564615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.531 qpair failed and we were unable to recover it.
00:28:36.531 [2024-12-10 12:36:58.574539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.531 [2024-12-10 12:36:58.574631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.531 [2024-12-10 12:36:58.574645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.531 [2024-12-10 12:36:58.574652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.531 [2024-12-10 12:36:58.574659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.531 [2024-12-10 12:36:58.574673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.531 qpair failed and we were unable to recover it.
00:28:36.531 [2024-12-10 12:36:58.584592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.531 [2024-12-10 12:36:58.584651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.531 [2024-12-10 12:36:58.584666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.531 [2024-12-10 12:36:58.584673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.531 [2024-12-10 12:36:58.584680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.531 [2024-12-10 12:36:58.584695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.531 qpair failed and we were unable to recover it.
00:28:36.531 [2024-12-10 12:36:58.594613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.531 [2024-12-10 12:36:58.594667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.531 [2024-12-10 12:36:58.594682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.531 [2024-12-10 12:36:58.594689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.531 [2024-12-10 12:36:58.594696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.531 [2024-12-10 12:36:58.594711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.531 qpair failed and we were unable to recover it.
00:28:36.531 [2024-12-10 12:36:58.604622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.531 [2024-12-10 12:36:58.604672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.531 [2024-12-10 12:36:58.604686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.531 [2024-12-10 12:36:58.604693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.531 [2024-12-10 12:36:58.604700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.531 [2024-12-10 12:36:58.604715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.531 qpair failed and we were unable to recover it.
00:28:36.531 [2024-12-10 12:36:58.614660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.531 [2024-12-10 12:36:58.614713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.531 [2024-12-10 12:36:58.614727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.531 [2024-12-10 12:36:58.614735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.531 [2024-12-10 12:36:58.614742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.531 [2024-12-10 12:36:58.614757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.531 qpair failed and we were unable to recover it.
00:28:36.531 [2024-12-10 12:36:58.624699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.531 [2024-12-10 12:36:58.624765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.531 [2024-12-10 12:36:58.624780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.531 [2024-12-10 12:36:58.624787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.531 [2024-12-10 12:36:58.624793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.531 [2024-12-10 12:36:58.624809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.531 qpair failed and we were unable to recover it.
00:28:36.531 [2024-12-10 12:36:58.634717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.531 [2024-12-10 12:36:58.634775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.531 [2024-12-10 12:36:58.634789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.531 [2024-12-10 12:36:58.634796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.531 [2024-12-10 12:36:58.634802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.531 [2024-12-10 12:36:58.634817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.531 qpair failed and we were unable to recover it.
00:28:36.531 [2024-12-10 12:36:58.644742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.532 [2024-12-10 12:36:58.644796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.532 [2024-12-10 12:36:58.644809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.532 [2024-12-10 12:36:58.644816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.532 [2024-12-10 12:36:58.644822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.532 [2024-12-10 12:36:58.644837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.532 qpair failed and we were unable to recover it.
00:28:36.532 [2024-12-10 12:36:58.654779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.532 [2024-12-10 12:36:58.654837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.532 [2024-12-10 12:36:58.654851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.532 [2024-12-10 12:36:58.654858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.532 [2024-12-10 12:36:58.654865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.532 [2024-12-10 12:36:58.654880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.532 qpair failed and we were unable to recover it.
00:28:36.532 [2024-12-10 12:36:58.664738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.532 [2024-12-10 12:36:58.664799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.532 [2024-12-10 12:36:58.664812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.532 [2024-12-10 12:36:58.664820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.532 [2024-12-10 12:36:58.664826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.532 [2024-12-10 12:36:58.664842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.532 qpair failed and we were unable to recover it.
00:28:36.532 [2024-12-10 12:36:58.674841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.532 [2024-12-10 12:36:58.674893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.532 [2024-12-10 12:36:58.674906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.532 [2024-12-10 12:36:58.674913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.532 [2024-12-10 12:36:58.674920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.532 [2024-12-10 12:36:58.674935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.532 qpair failed and we were unable to recover it.
00:28:36.532 [2024-12-10 12:36:58.684905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.532 [2024-12-10 12:36:58.684961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.532 [2024-12-10 12:36:58.684976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.532 [2024-12-10 12:36:58.684984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.532 [2024-12-10 12:36:58.684991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.532 [2024-12-10 12:36:58.685006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.532 qpair failed and we were unable to recover it.
00:28:36.791 [2024-12-10 12:36:58.694926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.791 [2024-12-10 12:36:58.695031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.791 [2024-12-10 12:36:58.695049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.791 [2024-12-10 12:36:58.695063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.791 [2024-12-10 12:36:58.695070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.791 [2024-12-10 12:36:58.695087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.791 qpair failed and we were unable to recover it.
00:28:36.791 [2024-12-10 12:36:58.704910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:36.792 [2024-12-10 12:36:58.705000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:36.792 [2024-12-10 12:36:58.705017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:36.792 [2024-12-10 12:36:58.705026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:36.792 [2024-12-10 12:36:58.705033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90
00:28:36.792 [2024-12-10 12:36:58.705050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:36.792 qpair failed and we were unable to recover it.
00:28:36.792 [2024-12-10 12:36:58.714952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.792 [2024-12-10 12:36:58.715035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.792 [2024-12-10 12:36:58.715051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.792 [2024-12-10 12:36:58.715059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.792 [2024-12-10 12:36:58.715066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.792 [2024-12-10 12:36:58.715082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.792 qpair failed and we were unable to recover it. 
00:28:36.792 [2024-12-10 12:36:58.724981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.792 [2024-12-10 12:36:58.725036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.792 [2024-12-10 12:36:58.725051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.792 [2024-12-10 12:36:58.725058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.792 [2024-12-10 12:36:58.725065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.792 [2024-12-10 12:36:58.725081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.792 qpair failed and we were unable to recover it. 
00:28:36.792 [2024-12-10 12:36:58.735020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.792 [2024-12-10 12:36:58.735074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.792 [2024-12-10 12:36:58.735089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.792 [2024-12-10 12:36:58.735096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.792 [2024-12-10 12:36:58.735103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.792 [2024-12-10 12:36:58.735120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.792 qpair failed and we were unable to recover it. 
00:28:36.792 [2024-12-10 12:36:58.745027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.792 [2024-12-10 12:36:58.745089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.792 [2024-12-10 12:36:58.745104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.792 [2024-12-10 12:36:58.745111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.792 [2024-12-10 12:36:58.745117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.792 [2024-12-10 12:36:58.745133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.792 qpair failed and we were unable to recover it. 
00:28:36.792 [2024-12-10 12:36:58.755069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.792 [2024-12-10 12:36:58.755124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.792 [2024-12-10 12:36:58.755139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.792 [2024-12-10 12:36:58.755146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.792 [2024-12-10 12:36:58.755153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.792 [2024-12-10 12:36:58.755172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.792 qpair failed and we were unable to recover it. 
00:28:36.792 [2024-12-10 12:36:58.765101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.792 [2024-12-10 12:36:58.765154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.792 [2024-12-10 12:36:58.765173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.792 [2024-12-10 12:36:58.765180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.792 [2024-12-10 12:36:58.765187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.792 [2024-12-10 12:36:58.765202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.792 qpair failed and we were unable to recover it. 
00:28:36.792 [2024-12-10 12:36:58.775119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.792 [2024-12-10 12:36:58.775178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.792 [2024-12-10 12:36:58.775193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.792 [2024-12-10 12:36:58.775201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.792 [2024-12-10 12:36:58.775207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.792 [2024-12-10 12:36:58.775222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.792 qpair failed and we were unable to recover it. 
00:28:36.792 [2024-12-10 12:36:58.785153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.792 [2024-12-10 12:36:58.785222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.792 [2024-12-10 12:36:58.785236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.792 [2024-12-10 12:36:58.785243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.792 [2024-12-10 12:36:58.785249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.792 [2024-12-10 12:36:58.785265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.792 qpair failed and we were unable to recover it. 
00:28:36.792 [2024-12-10 12:36:58.795184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.792 [2024-12-10 12:36:58.795236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.792 [2024-12-10 12:36:58.795250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.792 [2024-12-10 12:36:58.795258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.792 [2024-12-10 12:36:58.795264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.792 [2024-12-10 12:36:58.795280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.792 qpair failed and we were unable to recover it. 
00:28:36.792 [2024-12-10 12:36:58.805211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.792 [2024-12-10 12:36:58.805283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.792 [2024-12-10 12:36:58.805298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.792 [2024-12-10 12:36:58.805305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.792 [2024-12-10 12:36:58.805311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.793 [2024-12-10 12:36:58.805327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.793 qpair failed and we were unable to recover it. 
00:28:36.793 [2024-12-10 12:36:58.815271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.793 [2024-12-10 12:36:58.815331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.793 [2024-12-10 12:36:58.815345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.793 [2024-12-10 12:36:58.815352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.793 [2024-12-10 12:36:58.815359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.793 [2024-12-10 12:36:58.815374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.793 qpair failed and we were unable to recover it. 
00:28:36.793 [2024-12-10 12:36:58.825287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.793 [2024-12-10 12:36:58.825347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.793 [2024-12-10 12:36:58.825361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.793 [2024-12-10 12:36:58.825371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.793 [2024-12-10 12:36:58.825378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.793 [2024-12-10 12:36:58.825393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.793 qpair failed and we were unable to recover it. 
00:28:36.793 [2024-12-10 12:36:58.835299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.793 [2024-12-10 12:36:58.835358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.793 [2024-12-10 12:36:58.835372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.793 [2024-12-10 12:36:58.835380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.793 [2024-12-10 12:36:58.835386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.793 [2024-12-10 12:36:58.835402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.793 qpair failed and we were unable to recover it. 
00:28:36.793 [2024-12-10 12:36:58.845326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.793 [2024-12-10 12:36:58.845380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.793 [2024-12-10 12:36:58.845394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.793 [2024-12-10 12:36:58.845401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.793 [2024-12-10 12:36:58.845407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.793 [2024-12-10 12:36:58.845422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.793 qpair failed and we were unable to recover it. 
00:28:36.793 [2024-12-10 12:36:58.855387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.793 [2024-12-10 12:36:58.855447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.793 [2024-12-10 12:36:58.855460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.793 [2024-12-10 12:36:58.855468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.793 [2024-12-10 12:36:58.855474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.793 [2024-12-10 12:36:58.855489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.793 qpair failed and we were unable to recover it. 
00:28:36.793 [2024-12-10 12:36:58.865398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.793 [2024-12-10 12:36:58.865489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.793 [2024-12-10 12:36:58.865503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.793 [2024-12-10 12:36:58.865510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.793 [2024-12-10 12:36:58.865517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.793 [2024-12-10 12:36:58.865534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.793 qpair failed and we were unable to recover it. 
00:28:36.793 [2024-12-10 12:36:58.875468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.793 [2024-12-10 12:36:58.875529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.793 [2024-12-10 12:36:58.875542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.793 [2024-12-10 12:36:58.875550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.793 [2024-12-10 12:36:58.875557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.793 [2024-12-10 12:36:58.875572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.793 qpair failed and we were unable to recover it. 
00:28:36.793 [2024-12-10 12:36:58.885464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.793 [2024-12-10 12:36:58.885522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.793 [2024-12-10 12:36:58.885536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.793 [2024-12-10 12:36:58.885543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.793 [2024-12-10 12:36:58.885549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.793 [2024-12-10 12:36:58.885564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.793 qpair failed and we were unable to recover it. 
00:28:36.793 [2024-12-10 12:36:58.895474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.793 [2024-12-10 12:36:58.895526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.793 [2024-12-10 12:36:58.895540] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.793 [2024-12-10 12:36:58.895547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.793 [2024-12-10 12:36:58.895553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.793 [2024-12-10 12:36:58.895569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.793 qpair failed and we were unable to recover it. 
00:28:36.793 [2024-12-10 12:36:58.905508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.793 [2024-12-10 12:36:58.905566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.793 [2024-12-10 12:36:58.905579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.793 [2024-12-10 12:36:58.905586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.793 [2024-12-10 12:36:58.905592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.793 [2024-12-10 12:36:58.905607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.793 qpair failed and we were unable to recover it. 
00:28:36.793 [2024-12-10 12:36:58.915544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.793 [2024-12-10 12:36:58.915602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.793 [2024-12-10 12:36:58.915616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.793 [2024-12-10 12:36:58.915625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.793 [2024-12-10 12:36:58.915631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.794 [2024-12-10 12:36:58.915646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.794 qpair failed and we were unable to recover it. 
00:28:36.794 [2024-12-10 12:36:58.925561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.794 [2024-12-10 12:36:58.925617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.794 [2024-12-10 12:36:58.925631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.794 [2024-12-10 12:36:58.925638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.794 [2024-12-10 12:36:58.925645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.794 [2024-12-10 12:36:58.925661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.794 qpair failed and we were unable to recover it. 
00:28:36.794 [2024-12-10 12:36:58.935595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.794 [2024-12-10 12:36:58.935649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.794 [2024-12-10 12:36:58.935663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.794 [2024-12-10 12:36:58.935670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.794 [2024-12-10 12:36:58.935677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.794 [2024-12-10 12:36:58.935692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.794 qpair failed and we were unable to recover it. 
00:28:36.794 [2024-12-10 12:36:58.945625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.794 [2024-12-10 12:36:58.945681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.794 [2024-12-10 12:36:58.945695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.794 [2024-12-10 12:36:58.945703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.794 [2024-12-10 12:36:58.945709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.794 [2024-12-10 12:36:58.945724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.794 qpair failed and we were unable to recover it. 
00:28:36.794 [2024-12-10 12:36:58.955630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:36.794 [2024-12-10 12:36:58.955693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:36.794 [2024-12-10 12:36:58.955714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:36.794 [2024-12-10 12:36:58.955722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:36.794 [2024-12-10 12:36:58.955729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:36.794 [2024-12-10 12:36:58.955747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:36.794 qpair failed and we were unable to recover it. 
00:28:37.054 [2024-12-10 12:36:58.965676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.054 [2024-12-10 12:36:58.965734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.054 [2024-12-10 12:36:58.965751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.054 [2024-12-10 12:36:58.965760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.054 [2024-12-10 12:36:58.965767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.054 [2024-12-10 12:36:58.965785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.054 qpair failed and we were unable to recover it. 
00:28:37.054 [2024-12-10 12:36:58.975740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.054 [2024-12-10 12:36:58.975792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.054 [2024-12-10 12:36:58.975807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.054 [2024-12-10 12:36:58.975814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.054 [2024-12-10 12:36:58.975820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.054 [2024-12-10 12:36:58.975836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.054 qpair failed and we were unable to recover it. 
00:28:37.054 [2024-12-10 12:36:58.985743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.054 [2024-12-10 12:36:58.985810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.054 [2024-12-10 12:36:58.985825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.054 [2024-12-10 12:36:58.985832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.054 [2024-12-10 12:36:58.985838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.054 [2024-12-10 12:36:58.985854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.054 qpair failed and we were unable to recover it. 
00:28:37.054 [2024-12-10 12:36:58.995751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.054 [2024-12-10 12:36:58.995809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.054 [2024-12-10 12:36:58.995823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.054 [2024-12-10 12:36:58.995830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.054 [2024-12-10 12:36:58.995840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.054 [2024-12-10 12:36:58.995856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.054 qpair failed and we were unable to recover it. 
00:28:37.054 [2024-12-10 12:36:59.005781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.054 [2024-12-10 12:36:59.005836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.054 [2024-12-10 12:36:59.005850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.054 [2024-12-10 12:36:59.005857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.054 [2024-12-10 12:36:59.005865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.054 [2024-12-10 12:36:59.005879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.054 qpair failed and we were unable to recover it. 
00:28:37.054 [2024-12-10 12:36:59.015807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.054 [2024-12-10 12:36:59.015861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.054 [2024-12-10 12:36:59.015876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.054 [2024-12-10 12:36:59.015883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.054 [2024-12-10 12:36:59.015890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.054 [2024-12-10 12:36:59.015905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.054 qpair failed and we were unable to recover it. 
00:28:37.054 [2024-12-10 12:36:59.025860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.054 [2024-12-10 12:36:59.025926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.054 [2024-12-10 12:36:59.025941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.054 [2024-12-10 12:36:59.025948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.054 [2024-12-10 12:36:59.025954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.054 [2024-12-10 12:36:59.025970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.054 qpair failed and we were unable to recover it. 
00:28:37.054 [2024-12-10 12:36:59.035921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.054 [2024-12-10 12:36:59.036028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.054 [2024-12-10 12:36:59.036042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.054 [2024-12-10 12:36:59.036049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.054 [2024-12-10 12:36:59.036056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.054 [2024-12-10 12:36:59.036071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.054 qpair failed and we were unable to recover it. 
00:28:37.054 [2024-12-10 12:36:59.045907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.054 [2024-12-10 12:36:59.045962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.054 [2024-12-10 12:36:59.045977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.054 [2024-12-10 12:36:59.045985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.054 [2024-12-10 12:36:59.045991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.054 [2024-12-10 12:36:59.046006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.054 qpair failed and we were unable to recover it. 
00:28:37.054 [2024-12-10 12:36:59.055937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.054 [2024-12-10 12:36:59.055993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.054 [2024-12-10 12:36:59.056006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.054 [2024-12-10 12:36:59.056013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.054 [2024-12-10 12:36:59.056019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.054 [2024-12-10 12:36:59.056035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.054 qpair failed and we were unable to recover it. 
00:28:37.054 [2024-12-10 12:36:59.065973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.054 [2024-12-10 12:36:59.066033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.054 [2024-12-10 12:36:59.066047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.054 [2024-12-10 12:36:59.066054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.054 [2024-12-10 12:36:59.066061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.054 [2024-12-10 12:36:59.066077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.054 qpair failed and we were unable to recover it. 
00:28:37.054 [2024-12-10 12:36:59.076016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.054 [2024-12-10 12:36:59.076083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.054 [2024-12-10 12:36:59.076097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.054 [2024-12-10 12:36:59.076105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.055 [2024-12-10 12:36:59.076111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.055 [2024-12-10 12:36:59.076126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.055 qpair failed and we were unable to recover it. 
00:28:37.055 [2024-12-10 12:36:59.086029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.055 [2024-12-10 12:36:59.086080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.055 [2024-12-10 12:36:59.086097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.055 [2024-12-10 12:36:59.086104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.055 [2024-12-10 12:36:59.086112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.055 [2024-12-10 12:36:59.086126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.055 qpair failed and we were unable to recover it. 
00:28:37.055 [2024-12-10 12:36:59.096066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.055 [2024-12-10 12:36:59.096121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.055 [2024-12-10 12:36:59.096135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.055 [2024-12-10 12:36:59.096142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.055 [2024-12-10 12:36:59.096149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.055 [2024-12-10 12:36:59.096168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.055 qpair failed and we were unable to recover it. 
00:28:37.055 [2024-12-10 12:36:59.106091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.055 [2024-12-10 12:36:59.106152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.055 [2024-12-10 12:36:59.106171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.055 [2024-12-10 12:36:59.106178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.055 [2024-12-10 12:36:59.106184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.055 [2024-12-10 12:36:59.106200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.055 qpair failed and we were unable to recover it. 
00:28:37.055 [2024-12-10 12:36:59.116126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.055 [2024-12-10 12:36:59.116188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.055 [2024-12-10 12:36:59.116202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.055 [2024-12-10 12:36:59.116209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.055 [2024-12-10 12:36:59.116216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.055 [2024-12-10 12:36:59.116232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.055 qpair failed and we were unable to recover it. 
00:28:37.055 [2024-12-10 12:36:59.126066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.055 [2024-12-10 12:36:59.126124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.055 [2024-12-10 12:36:59.126138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.055 [2024-12-10 12:36:59.126146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.055 [2024-12-10 12:36:59.126155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.055 [2024-12-10 12:36:59.126175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.055 qpair failed and we were unable to recover it. 
00:28:37.055 [2024-12-10 12:36:59.136179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.055 [2024-12-10 12:36:59.136228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.055 [2024-12-10 12:36:59.136243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.055 [2024-12-10 12:36:59.136250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.055 [2024-12-10 12:36:59.136257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.055 [2024-12-10 12:36:59.136273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.055 qpair failed and we were unable to recover it. 
00:28:37.055 [2024-12-10 12:36:59.146208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.055 [2024-12-10 12:36:59.146290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.055 [2024-12-10 12:36:59.146304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.055 [2024-12-10 12:36:59.146311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.055 [2024-12-10 12:36:59.146318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.055 [2024-12-10 12:36:59.146333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.055 qpair failed and we were unable to recover it. 
00:28:37.055 [2024-12-10 12:36:59.156239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.055 [2024-12-10 12:36:59.156291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.055 [2024-12-10 12:36:59.156305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.055 [2024-12-10 12:36:59.156312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.055 [2024-12-10 12:36:59.156319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.055 [2024-12-10 12:36:59.156334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.055 qpair failed and we were unable to recover it. 
00:28:37.055 [2024-12-10 12:36:59.166259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.055 [2024-12-10 12:36:59.166317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.055 [2024-12-10 12:36:59.166331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.055 [2024-12-10 12:36:59.166338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.055 [2024-12-10 12:36:59.166344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.055 [2024-12-10 12:36:59.166359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.055 qpair failed and we were unable to recover it. 
00:28:37.055 [2024-12-10 12:36:59.176317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.055 [2024-12-10 12:36:59.176372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.055 [2024-12-10 12:36:59.176386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.055 [2024-12-10 12:36:59.176393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.055 [2024-12-10 12:36:59.176400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.055 [2024-12-10 12:36:59.176414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.055 qpair failed and we were unable to recover it. 
00:28:37.055 [2024-12-10 12:36:59.186331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.055 [2024-12-10 12:36:59.186387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.055 [2024-12-10 12:36:59.186400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.055 [2024-12-10 12:36:59.186407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.055 [2024-12-10 12:36:59.186413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.055 [2024-12-10 12:36:59.186428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.055 qpair failed and we were unable to recover it. 
00:28:37.055 [2024-12-10 12:36:59.196362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.055 [2024-12-10 12:36:59.196418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.055 [2024-12-10 12:36:59.196431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.055 [2024-12-10 12:36:59.196438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.055 [2024-12-10 12:36:59.196445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.055 [2024-12-10 12:36:59.196460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.055 qpair failed and we were unable to recover it. 
00:28:37.055 [2024-12-10 12:36:59.206397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.055 [2024-12-10 12:36:59.206469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.055 [2024-12-10 12:36:59.206484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.055 [2024-12-10 12:36:59.206491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.055 [2024-12-10 12:36:59.206497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.056 [2024-12-10 12:36:59.206512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.056 qpair failed and we were unable to recover it. 
00:28:37.056 [2024-12-10 12:36:59.216453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.056 [2024-12-10 12:36:59.216560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.056 [2024-12-10 12:36:59.216578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.056 [2024-12-10 12:36:59.216586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.056 [2024-12-10 12:36:59.216592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.056 [2024-12-10 12:36:59.216610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.056 qpair failed and we were unable to recover it. 
00:28:37.315 [2024-12-10 12:36:59.226461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.315 [2024-12-10 12:36:59.226522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.315 [2024-12-10 12:36:59.226539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.315 [2024-12-10 12:36:59.226548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.315 [2024-12-10 12:36:59.226555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.316 [2024-12-10 12:36:59.226572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.316 qpair failed and we were unable to recover it. 
00:28:37.316 [2024-12-10 12:36:59.236481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.316 [2024-12-10 12:36:59.236537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.316 [2024-12-10 12:36:59.236552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.316 [2024-12-10 12:36:59.236559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.316 [2024-12-10 12:36:59.236566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.316 [2024-12-10 12:36:59.236582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.316 qpair failed and we were unable to recover it. 
00:28:37.316 [2024-12-10 12:36:59.246525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.316 [2024-12-10 12:36:59.246588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.316 [2024-12-10 12:36:59.246603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.316 [2024-12-10 12:36:59.246610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.316 [2024-12-10 12:36:59.246616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.316 [2024-12-10 12:36:59.246632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.316 qpair failed and we were unable to recover it. 
00:28:37.316 [2024-12-10 12:36:59.256536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.316 [2024-12-10 12:36:59.256592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.316 [2024-12-10 12:36:59.256607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.316 [2024-12-10 12:36:59.256617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.316 [2024-12-10 12:36:59.256624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.316 [2024-12-10 12:36:59.256639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.316 qpair failed and we were unable to recover it. 
00:28:37.316 [2024-12-10 12:36:59.266567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.316 [2024-12-10 12:36:59.266625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.316 [2024-12-10 12:36:59.266639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.316 [2024-12-10 12:36:59.266646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.316 [2024-12-10 12:36:59.266652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.316 [2024-12-10 12:36:59.266667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.316 qpair failed and we were unable to recover it. 
00:28:37.316 [2024-12-10 12:36:59.276620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.316 [2024-12-10 12:36:59.276672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.316 [2024-12-10 12:36:59.276686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.316 [2024-12-10 12:36:59.276693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.316 [2024-12-10 12:36:59.276700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.316 [2024-12-10 12:36:59.276715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.316 qpair failed and we were unable to recover it. 
00:28:37.316 [2024-12-10 12:36:59.286624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.316 [2024-12-10 12:36:59.286675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.316 [2024-12-10 12:36:59.286689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.316 [2024-12-10 12:36:59.286696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.316 [2024-12-10 12:36:59.286703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.316 [2024-12-10 12:36:59.286718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.316 qpair failed and we were unable to recover it. 
00:28:37.316 [2024-12-10 12:36:59.296637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.316 [2024-12-10 12:36:59.296691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.316 [2024-12-10 12:36:59.296704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.316 [2024-12-10 12:36:59.296711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.316 [2024-12-10 12:36:59.296718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.316 [2024-12-10 12:36:59.296736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.316 qpair failed and we were unable to recover it. 
00:28:37.316 [2024-12-10 12:36:59.306688] through 00:28:37.578 [2024-12-10 12:36:59.637708]: the same seven-entry CONNECT failure sequence (Unknown controller ID 0x1; Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1; Connect command completed with error: sct 1, sc 130; Failed to poll NVMe-oF Fabric CONNECT command; Failed to connect tqpair=0x7f88d0000b90; CQ transport error -6 (No such device or address) on qpair id 2; qpair failed and we were unable to recover it) repeated 34 more times at roughly 10 ms intervals.
00:28:37.578 [2024-12-10 12:36:59.647585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.578 [2024-12-10 12:36:59.647648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.578 [2024-12-10 12:36:59.647665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.578 [2024-12-10 12:36:59.647673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.578 [2024-12-10 12:36:59.647680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.579 [2024-12-10 12:36:59.647695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.579 qpair failed and we were unable to recover it. 
00:28:37.579 [2024-12-10 12:36:59.657662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.579 [2024-12-10 12:36:59.657716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.579 [2024-12-10 12:36:59.657730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.579 [2024-12-10 12:36:59.657737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.579 [2024-12-10 12:36:59.657743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.579 [2024-12-10 12:36:59.657758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.579 qpair failed and we were unable to recover it. 
00:28:37.579 [2024-12-10 12:36:59.667646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.579 [2024-12-10 12:36:59.667700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.579 [2024-12-10 12:36:59.667714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.579 [2024-12-10 12:36:59.667720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.579 [2024-12-10 12:36:59.667727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.579 [2024-12-10 12:36:59.667742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.579 qpair failed and we were unable to recover it. 
00:28:37.579 [2024-12-10 12:36:59.677716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.579 [2024-12-10 12:36:59.677768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.579 [2024-12-10 12:36:59.677782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.579 [2024-12-10 12:36:59.677788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.579 [2024-12-10 12:36:59.677795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.579 [2024-12-10 12:36:59.677809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.579 qpair failed and we were unable to recover it. 
00:28:37.579 [2024-12-10 12:36:59.687690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.579 [2024-12-10 12:36:59.687746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.579 [2024-12-10 12:36:59.687760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.579 [2024-12-10 12:36:59.687770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.579 [2024-12-10 12:36:59.687777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.579 [2024-12-10 12:36:59.687793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.579 qpair failed and we were unable to recover it. 
00:28:37.579 [2024-12-10 12:36:59.697724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.579 [2024-12-10 12:36:59.697773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.579 [2024-12-10 12:36:59.697787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.579 [2024-12-10 12:36:59.697794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.579 [2024-12-10 12:36:59.697800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.579 [2024-12-10 12:36:59.697816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.579 qpair failed and we were unable to recover it. 
00:28:37.579 [2024-12-10 12:36:59.707767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.579 [2024-12-10 12:36:59.707827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.579 [2024-12-10 12:36:59.707841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.579 [2024-12-10 12:36:59.707849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.579 [2024-12-10 12:36:59.707855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.579 [2024-12-10 12:36:59.707871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.579 qpair failed and we were unable to recover it. 
00:28:37.579 [2024-12-10 12:36:59.717843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.579 [2024-12-10 12:36:59.717901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.579 [2024-12-10 12:36:59.717916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.579 [2024-12-10 12:36:59.717923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.579 [2024-12-10 12:36:59.717930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.579 [2024-12-10 12:36:59.717946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.579 qpair failed and we were unable to recover it. 
00:28:37.579 [2024-12-10 12:36:59.727861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.579 [2024-12-10 12:36:59.727913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.579 [2024-12-10 12:36:59.727926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.579 [2024-12-10 12:36:59.727933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.579 [2024-12-10 12:36:59.727940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.579 [2024-12-10 12:36:59.727955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.579 qpair failed and we were unable to recover it. 
00:28:37.579 [2024-12-10 12:36:59.737956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.579 [2024-12-10 12:36:59.738012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.579 [2024-12-10 12:36:59.738030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.579 [2024-12-10 12:36:59.738038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.579 [2024-12-10 12:36:59.738044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.579 [2024-12-10 12:36:59.738061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.579 qpair failed and we were unable to recover it. 
00:28:37.839 [2024-12-10 12:36:59.747906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.839 [2024-12-10 12:36:59.747965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.839 [2024-12-10 12:36:59.747983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.839 [2024-12-10 12:36:59.747991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.839 [2024-12-10 12:36:59.747997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.839 [2024-12-10 12:36:59.748015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.839 qpair failed and we were unable to recover it. 
00:28:37.839 [2024-12-10 12:36:59.757999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.839 [2024-12-10 12:36:59.758063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.839 [2024-12-10 12:36:59.758077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.839 [2024-12-10 12:36:59.758084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.839 [2024-12-10 12:36:59.758091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.839 [2024-12-10 12:36:59.758106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.839 qpair failed and we were unable to recover it. 
00:28:37.839 [2024-12-10 12:36:59.767968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.839 [2024-12-10 12:36:59.768067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.839 [2024-12-10 12:36:59.768082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.839 [2024-12-10 12:36:59.768089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.839 [2024-12-10 12:36:59.768095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.839 [2024-12-10 12:36:59.768111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.839 qpair failed and we were unable to recover it. 
00:28:37.839 [2024-12-10 12:36:59.777963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.839 [2024-12-10 12:36:59.778019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.839 [2024-12-10 12:36:59.778033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.839 [2024-12-10 12:36:59.778040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.839 [2024-12-10 12:36:59.778047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.839 [2024-12-10 12:36:59.778062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.839 qpair failed and we were unable to recover it. 
00:28:37.839 [2024-12-10 12:36:59.788115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.839 [2024-12-10 12:36:59.788218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.839 [2024-12-10 12:36:59.788233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.839 [2024-12-10 12:36:59.788240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.839 [2024-12-10 12:36:59.788246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.839 [2024-12-10 12:36:59.788263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.839 qpair failed and we were unable to recover it. 
00:28:37.839 [2024-12-10 12:36:59.798126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.839 [2024-12-10 12:36:59.798183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.839 [2024-12-10 12:36:59.798198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.839 [2024-12-10 12:36:59.798206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.839 [2024-12-10 12:36:59.798212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.839 [2024-12-10 12:36:59.798228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.839 qpair failed and we were unable to recover it. 
00:28:37.839 [2024-12-10 12:36:59.808078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.839 [2024-12-10 12:36:59.808133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.839 [2024-12-10 12:36:59.808147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.839 [2024-12-10 12:36:59.808154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.839 [2024-12-10 12:36:59.808165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.839 [2024-12-10 12:36:59.808181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.839 qpair failed and we were unable to recover it. 
00:28:37.839 [2024-12-10 12:36:59.818081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.839 [2024-12-10 12:36:59.818138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.839 [2024-12-10 12:36:59.818154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.839 [2024-12-10 12:36:59.818168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.839 [2024-12-10 12:36:59.818175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.839 [2024-12-10 12:36:59.818191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.839 qpair failed and we were unable to recover it. 
00:28:37.839 [2024-12-10 12:36:59.828210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.839 [2024-12-10 12:36:59.828278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.839 [2024-12-10 12:36:59.828294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.839 [2024-12-10 12:36:59.828301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.839 [2024-12-10 12:36:59.828307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.839 [2024-12-10 12:36:59.828323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.839 qpair failed and we were unable to recover it. 
00:28:37.839 [2024-12-10 12:36:59.838231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.839 [2024-12-10 12:36:59.838290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.839 [2024-12-10 12:36:59.838303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.839 [2024-12-10 12:36:59.838310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.839 [2024-12-10 12:36:59.838317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.839 [2024-12-10 12:36:59.838333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.839 qpair failed and we were unable to recover it. 
00:28:37.839 [2024-12-10 12:36:59.848247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.839 [2024-12-10 12:36:59.848301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.840 [2024-12-10 12:36:59.848315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.840 [2024-12-10 12:36:59.848322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.840 [2024-12-10 12:36:59.848328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.840 [2024-12-10 12:36:59.848343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.840 qpair failed and we were unable to recover it. 
00:28:37.840 [2024-12-10 12:36:59.858265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.840 [2024-12-10 12:36:59.858327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.840 [2024-12-10 12:36:59.858341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.840 [2024-12-10 12:36:59.858348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.840 [2024-12-10 12:36:59.858355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.840 [2024-12-10 12:36:59.858373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.840 qpair failed and we were unable to recover it. 
00:28:37.840 [2024-12-10 12:36:59.868239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.840 [2024-12-10 12:36:59.868293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.840 [2024-12-10 12:36:59.868307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.840 [2024-12-10 12:36:59.868314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.840 [2024-12-10 12:36:59.868321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.840 [2024-12-10 12:36:59.868336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.840 qpair failed and we were unable to recover it. 
00:28:37.840 [2024-12-10 12:36:59.878271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.840 [2024-12-10 12:36:59.878364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.840 [2024-12-10 12:36:59.878378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.840 [2024-12-10 12:36:59.878385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.840 [2024-12-10 12:36:59.878392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.840 [2024-12-10 12:36:59.878406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.840 qpair failed and we were unable to recover it. 
00:28:37.840 [2024-12-10 12:36:59.888393] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.840 [2024-12-10 12:36:59.888456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.840 [2024-12-10 12:36:59.888469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.840 [2024-12-10 12:36:59.888476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.840 [2024-12-10 12:36:59.888483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.840 [2024-12-10 12:36:59.888498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.840 qpair failed and we were unable to recover it. 
00:28:37.840 [2024-12-10 12:36:59.898316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.840 [2024-12-10 12:36:59.898378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.840 [2024-12-10 12:36:59.898391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.840 [2024-12-10 12:36:59.898399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.840 [2024-12-10 12:36:59.898405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.840 [2024-12-10 12:36:59.898421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.840 qpair failed and we were unable to recover it. 
00:28:37.840 [2024-12-10 12:36:59.908432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.840 [2024-12-10 12:36:59.908492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.840 [2024-12-10 12:36:59.908505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.840 [2024-12-10 12:36:59.908512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.840 [2024-12-10 12:36:59.908518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.840 [2024-12-10 12:36:59.908534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.840 qpair failed and we were unable to recover it. 
00:28:37.840 [2024-12-10 12:36:59.918482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.840 [2024-12-10 12:36:59.918538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.840 [2024-12-10 12:36:59.918553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.840 [2024-12-10 12:36:59.918560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.840 [2024-12-10 12:36:59.918566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.840 [2024-12-10 12:36:59.918582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.840 qpair failed and we were unable to recover it. 
00:28:37.840 [2024-12-10 12:36:59.928411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.840 [2024-12-10 12:36:59.928465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.840 [2024-12-10 12:36:59.928479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.840 [2024-12-10 12:36:59.928487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.840 [2024-12-10 12:36:59.928493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.840 [2024-12-10 12:36:59.928508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.840 qpair failed and we were unable to recover it. 
00:28:37.840 [2024-12-10 12:36:59.938517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.840 [2024-12-10 12:36:59.938580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.840 [2024-12-10 12:36:59.938594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.840 [2024-12-10 12:36:59.938601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.840 [2024-12-10 12:36:59.938607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.840 [2024-12-10 12:36:59.938622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.840 qpair failed and we were unable to recover it. 
00:28:37.840 [2024-12-10 12:36:59.948490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.840 [2024-12-10 12:36:59.948595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.840 [2024-12-10 12:36:59.948612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.840 [2024-12-10 12:36:59.948620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.840 [2024-12-10 12:36:59.948626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.840 [2024-12-10 12:36:59.948641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.840 qpair failed and we were unable to recover it. 
00:28:37.840 [2024-12-10 12:36:59.958521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.840 [2024-12-10 12:36:59.958616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.840 [2024-12-10 12:36:59.958630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.840 [2024-12-10 12:36:59.958638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.840 [2024-12-10 12:36:59.958644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.840 [2024-12-10 12:36:59.958659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.840 qpair failed and we were unable to recover it. 
00:28:37.840 [2024-12-10 12:36:59.968643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.840 [2024-12-10 12:36:59.968705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.840 [2024-12-10 12:36:59.968719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.840 [2024-12-10 12:36:59.968726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.840 [2024-12-10 12:36:59.968732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.840 [2024-12-10 12:36:59.968747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.840 qpair failed and we were unable to recover it. 
00:28:37.840 [2024-12-10 12:36:59.978642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.840 [2024-12-10 12:36:59.978705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.840 [2024-12-10 12:36:59.978719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.840 [2024-12-10 12:36:59.978727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.840 [2024-12-10 12:36:59.978733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.841 [2024-12-10 12:36:59.978748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.841 qpair failed and we were unable to recover it. 
00:28:37.841 [2024-12-10 12:36:59.988721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.841 [2024-12-10 12:36:59.988786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.841 [2024-12-10 12:36:59.988800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.841 [2024-12-10 12:36:59.988806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.841 [2024-12-10 12:36:59.988813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.841 [2024-12-10 12:36:59.988831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.841 qpair failed and we were unable to recover it. 
00:28:37.841 [2024-12-10 12:36:59.998759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:37.841 [2024-12-10 12:36:59.998861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:37.841 [2024-12-10 12:36:59.998874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:37.841 [2024-12-10 12:36:59.998881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:37.841 [2024-12-10 12:36:59.998887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:37.841 [2024-12-10 12:36:59.998902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:37.841 qpair failed and we were unable to recover it. 
00:28:38.100 [2024-12-10 12:37:00.008741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.100 [2024-12-10 12:37:00.008809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.100 [2024-12-10 12:37:00.008829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.100 [2024-12-10 12:37:00.008838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.100 [2024-12-10 12:37:00.008845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.100 [2024-12-10 12:37:00.008864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.100 qpair failed and we were unable to recover it. 
00:28:38.100 [2024-12-10 12:37:00.018749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.100 [2024-12-10 12:37:00.018809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.100 [2024-12-10 12:37:00.018826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.100 [2024-12-10 12:37:00.018834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.100 [2024-12-10 12:37:00.018840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.100 [2024-12-10 12:37:00.018857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.100 qpair failed and we were unable to recover it. 
00:28:38.100 [2024-12-10 12:37:00.028749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.101 [2024-12-10 12:37:00.028812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.101 [2024-12-10 12:37:00.028831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.101 [2024-12-10 12:37:00.028840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.101 [2024-12-10 12:37:00.028847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.101 [2024-12-10 12:37:00.028864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.101 qpair failed and we were unable to recover it. 
00:28:38.101 [2024-12-10 12:37:00.038743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.101 [2024-12-10 12:37:00.038807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.101 [2024-12-10 12:37:00.038823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.101 [2024-12-10 12:37:00.038831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.101 [2024-12-10 12:37:00.038839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.101 [2024-12-10 12:37:00.038855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.101 qpair failed and we were unable to recover it. 
00:28:38.101 [2024-12-10 12:37:00.048742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.101 [2024-12-10 12:37:00.048807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.101 [2024-12-10 12:37:00.048821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.101 [2024-12-10 12:37:00.048828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.101 [2024-12-10 12:37:00.048834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.101 [2024-12-10 12:37:00.048850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.101 qpair failed and we were unable to recover it. 
00:28:38.101 [2024-12-10 12:37:00.058844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.101 [2024-12-10 12:37:00.058894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.101 [2024-12-10 12:37:00.058907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.101 [2024-12-10 12:37:00.058914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.101 [2024-12-10 12:37:00.058921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.101 [2024-12-10 12:37:00.058937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.101 qpair failed and we were unable to recover it. 
00:28:38.101 [2024-12-10 12:37:00.068821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.101 [2024-12-10 12:37:00.068917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.101 [2024-12-10 12:37:00.068934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.101 [2024-12-10 12:37:00.068941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.101 [2024-12-10 12:37:00.068948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.101 [2024-12-10 12:37:00.068965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.101 qpair failed and we were unable to recover it. 
00:28:38.101 [2024-12-10 12:37:00.078912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.101 [2024-12-10 12:37:00.078967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.101 [2024-12-10 12:37:00.078985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.101 [2024-12-10 12:37:00.078992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.101 [2024-12-10 12:37:00.078999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.101 [2024-12-10 12:37:00.079015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.101 qpair failed and we were unable to recover it. 
00:28:38.101 [2024-12-10 12:37:00.088921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.101 [2024-12-10 12:37:00.088978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.101 [2024-12-10 12:37:00.088993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.101 [2024-12-10 12:37:00.089001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.101 [2024-12-10 12:37:00.089007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.101 [2024-12-10 12:37:00.089022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.101 qpair failed and we were unable to recover it. 
00:28:38.101 [2024-12-10 12:37:00.098960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.101 [2024-12-10 12:37:00.099017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.101 [2024-12-10 12:37:00.099031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.101 [2024-12-10 12:37:00.099039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.101 [2024-12-10 12:37:00.099046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.101 [2024-12-10 12:37:00.099062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.101 qpair failed and we were unable to recover it. 
00:28:38.101 [2024-12-10 12:37:00.108999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.101 [2024-12-10 12:37:00.109065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.101 [2024-12-10 12:37:00.109079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.101 [2024-12-10 12:37:00.109086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.101 [2024-12-10 12:37:00.109092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.101 [2024-12-10 12:37:00.109107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.101 qpair failed and we were unable to recover it. 
00:28:38.101 [2024-12-10 12:37:00.119026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.101 [2024-12-10 12:37:00.119080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.101 [2024-12-10 12:37:00.119095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.101 [2024-12-10 12:37:00.119102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.101 [2024-12-10 12:37:00.119112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.101 [2024-12-10 12:37:00.119127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.101 qpair failed and we were unable to recover it. 
00:28:38.101 [2024-12-10 12:37:00.129046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.101 [2024-12-10 12:37:00.129098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.101 [2024-12-10 12:37:00.129113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.101 [2024-12-10 12:37:00.129120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.101 [2024-12-10 12:37:00.129126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.101 [2024-12-10 12:37:00.129142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.101 qpair failed and we were unable to recover it. 
00:28:38.101 [2024-12-10 12:37:00.139007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.101 [2024-12-10 12:37:00.139062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.101 [2024-12-10 12:37:00.139076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.101 [2024-12-10 12:37:00.139083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.101 [2024-12-10 12:37:00.139090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.101 [2024-12-10 12:37:00.139105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.101 qpair failed and we were unable to recover it. 
00:28:38.101 [2024-12-10 12:37:00.149110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.101 [2024-12-10 12:37:00.149176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.101 [2024-12-10 12:37:00.149191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.101 [2024-12-10 12:37:00.149198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.101 [2024-12-10 12:37:00.149204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.101 [2024-12-10 12:37:00.149220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.101 qpair failed and we were unable to recover it. 
00:28:38.101 [2024-12-10 12:37:00.159126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.101 [2024-12-10 12:37:00.159187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.102 [2024-12-10 12:37:00.159201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.102 [2024-12-10 12:37:00.159209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.102 [2024-12-10 12:37:00.159215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.102 [2024-12-10 12:37:00.159230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.102 qpair failed and we were unable to recover it. 
00:28:38.102 [2024-12-10 12:37:00.169150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.102 [2024-12-10 12:37:00.169210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.102 [2024-12-10 12:37:00.169224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.102 [2024-12-10 12:37:00.169232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.102 [2024-12-10 12:37:00.169239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.102 [2024-12-10 12:37:00.169254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.102 qpair failed and we were unable to recover it. 
00:28:38.102 [2024-12-10 12:37:00.179201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.102 [2024-12-10 12:37:00.179254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.102 [2024-12-10 12:37:00.179267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.102 [2024-12-10 12:37:00.179274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.102 [2024-12-10 12:37:00.179280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.102 [2024-12-10 12:37:00.179296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.102 qpair failed and we were unable to recover it. 
00:28:38.102 [2024-12-10 12:37:00.189240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.102 [2024-12-10 12:37:00.189310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.102 [2024-12-10 12:37:00.189323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.102 [2024-12-10 12:37:00.189331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.102 [2024-12-10 12:37:00.189337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.102 [2024-12-10 12:37:00.189351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.102 qpair failed and we were unable to recover it. 
00:28:38.102 [2024-12-10 12:37:00.199260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.102 [2024-12-10 12:37:00.199310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.102 [2024-12-10 12:37:00.199324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.102 [2024-12-10 12:37:00.199331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.102 [2024-12-10 12:37:00.199337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.102 [2024-12-10 12:37:00.199353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.102 qpair failed and we were unable to recover it. 
00:28:38.102 [2024-12-10 12:37:00.209253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.102 [2024-12-10 12:37:00.209306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.102 [2024-12-10 12:37:00.209325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.102 [2024-12-10 12:37:00.209332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.102 [2024-12-10 12:37:00.209339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.102 [2024-12-10 12:37:00.209355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.102 qpair failed and we were unable to recover it. 
00:28:38.102 [2024-12-10 12:37:00.219320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.102 [2024-12-10 12:37:00.219390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.102 [2024-12-10 12:37:00.219405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.102 [2024-12-10 12:37:00.219412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.102 [2024-12-10 12:37:00.219418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.102 [2024-12-10 12:37:00.219433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.102 qpair failed and we were unable to recover it. 
00:28:38.102 [2024-12-10 12:37:00.229340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.102 [2024-12-10 12:37:00.229407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.102 [2024-12-10 12:37:00.229422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.102 [2024-12-10 12:37:00.229429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.102 [2024-12-10 12:37:00.229435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.102 [2024-12-10 12:37:00.229451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.102 qpair failed and we were unable to recover it. 
00:28:38.102 [2024-12-10 12:37:00.239423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.102 [2024-12-10 12:37:00.239478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.102 [2024-12-10 12:37:00.239492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.102 [2024-12-10 12:37:00.239499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.102 [2024-12-10 12:37:00.239506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.102 [2024-12-10 12:37:00.239521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.102 qpair failed and we were unable to recover it. 
00:28:38.102 [2024-12-10 12:37:00.249394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.102 [2024-12-10 12:37:00.249448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.102 [2024-12-10 12:37:00.249462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.102 [2024-12-10 12:37:00.249472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.102 [2024-12-10 12:37:00.249479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.102 [2024-12-10 12:37:00.249494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.102 qpair failed and we were unable to recover it. 
00:28:38.102 [2024-12-10 12:37:00.259433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.102 [2024-12-10 12:37:00.259486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.102 [2024-12-10 12:37:00.259500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.102 [2024-12-10 12:37:00.259507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.102 [2024-12-10 12:37:00.259513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.102 [2024-12-10 12:37:00.259529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.102 qpair failed and we were unable to recover it. 
00:28:38.362 [2024-12-10 12:37:00.269466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.362 [2024-12-10 12:37:00.269529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.362 [2024-12-10 12:37:00.269547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.362 [2024-12-10 12:37:00.269556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.362 [2024-12-10 12:37:00.269563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.362 [2024-12-10 12:37:00.269581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.362 qpair failed and we were unable to recover it. 
00:28:38.362 [2024-12-10 12:37:00.279495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.362 [2024-12-10 12:37:00.279558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.362 [2024-12-10 12:37:00.279575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.362 [2024-12-10 12:37:00.279583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.362 [2024-12-10 12:37:00.279589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.362 [2024-12-10 12:37:00.279606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.362 qpair failed and we were unable to recover it. 
00:28:38.362 [2024-12-10 12:37:00.289509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.362 [2024-12-10 12:37:00.289567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.362 [2024-12-10 12:37:00.289581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.362 [2024-12-10 12:37:00.289589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.362 [2024-12-10 12:37:00.289595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.362 [2024-12-10 12:37:00.289611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.362 qpair failed and we were unable to recover it. 
00:28:38.362 [2024-12-10 12:37:00.299544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.362 [2024-12-10 12:37:00.299598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.362 [2024-12-10 12:37:00.299613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.362 [2024-12-10 12:37:00.299621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.362 [2024-12-10 12:37:00.299627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.362 [2024-12-10 12:37:00.299642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.362 qpair failed and we were unable to recover it. 
00:28:38.362 [2024-12-10 12:37:00.309592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.362 [2024-12-10 12:37:00.309653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.362 [2024-12-10 12:37:00.309667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.362 [2024-12-10 12:37:00.309675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.362 [2024-12-10 12:37:00.309681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.362 [2024-12-10 12:37:00.309696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.362 qpair failed and we were unable to recover it. 
00:28:38.362 [2024-12-10 12:37:00.319607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.362 [2024-12-10 12:37:00.319659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.362 [2024-12-10 12:37:00.319674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.362 [2024-12-10 12:37:00.319681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.362 [2024-12-10 12:37:00.319688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.362 [2024-12-10 12:37:00.319704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.362 qpair failed and we were unable to recover it. 
00:28:38.362 [2024-12-10 12:37:00.329666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.362 [2024-12-10 12:37:00.329774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.362 [2024-12-10 12:37:00.329789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.363 [2024-12-10 12:37:00.329796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.363 [2024-12-10 12:37:00.329803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.363 [2024-12-10 12:37:00.329818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.363 qpair failed and we were unable to recover it. 
00:28:38.363 [2024-12-10 12:37:00.339656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.363 [2024-12-10 12:37:00.339716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.363 [2024-12-10 12:37:00.339731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.363 [2024-12-10 12:37:00.339738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.363 [2024-12-10 12:37:00.339744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.363 [2024-12-10 12:37:00.339759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.363 qpair failed and we were unable to recover it. 
00:28:38.363 [2024-12-10 12:37:00.349691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.363 [2024-12-10 12:37:00.349747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.363 [2024-12-10 12:37:00.349760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.363 [2024-12-10 12:37:00.349767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.363 [2024-12-10 12:37:00.349773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.363 [2024-12-10 12:37:00.349788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.363 qpair failed and we were unable to recover it. 
00:28:38.363 [2024-12-10 12:37:00.359731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.363 [2024-12-10 12:37:00.359787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.363 [2024-12-10 12:37:00.359800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.363 [2024-12-10 12:37:00.359807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.363 [2024-12-10 12:37:00.359813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.363 [2024-12-10 12:37:00.359828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.363 qpair failed and we were unable to recover it. 
00:28:38.363 [2024-12-10 12:37:00.369747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.363 [2024-12-10 12:37:00.369807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.363 [2024-12-10 12:37:00.369821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.363 [2024-12-10 12:37:00.369829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.363 [2024-12-10 12:37:00.369835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.363 [2024-12-10 12:37:00.369850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.363 qpair failed and we were unable to recover it. 
00:28:38.363 [2024-12-10 12:37:00.379827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.363 [2024-12-10 12:37:00.379924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.363 [2024-12-10 12:37:00.379938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.363 [2024-12-10 12:37:00.379949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.363 [2024-12-10 12:37:00.379955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.363 [2024-12-10 12:37:00.379970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.363 qpair failed and we were unable to recover it. 
00:28:38.363 [2024-12-10 12:37:00.389847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.363 [2024-12-10 12:37:00.389926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.363 [2024-12-10 12:37:00.389940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.363 [2024-12-10 12:37:00.389948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.363 [2024-12-10 12:37:00.389954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.363 [2024-12-10 12:37:00.389969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.363 qpair failed and we were unable to recover it. 
00:28:38.363 [2024-12-10 12:37:00.399864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.363 [2024-12-10 12:37:00.399973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.363 [2024-12-10 12:37:00.399988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.363 [2024-12-10 12:37:00.399995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.363 [2024-12-10 12:37:00.400002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d0000b90 00:28:38.363 [2024-12-10 12:37:00.400018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:38.363 qpair failed and we were unable to recover it. 
00:28:38.363 [2024-12-10 12:37:00.409879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.363 [2024-12-10 12:37:00.409984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.363 [2024-12-10 12:37:00.410039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.363 [2024-12-10 12:37:00.410064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.363 [2024-12-10 12:37:00.410084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88cc000b90 00:28:38.363 [2024-12-10 12:37:00.410137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.363 qpair failed and we were unable to recover it. 
00:28:38.363 [2024-12-10 12:37:00.419893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.363 [2024-12-10 12:37:00.419978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.363 [2024-12-10 12:37:00.420005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.363 [2024-12-10 12:37:00.420020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.363 [2024-12-10 12:37:00.420032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88cc000b90 00:28:38.363 [2024-12-10 12:37:00.420070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:38.363 qpair failed and we were unable to recover it. 
00:28:38.363 [2024-12-10 12:37:00.429944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.363 [2024-12-10 12:37:00.430260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.363 [2024-12-10 12:37:00.430319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.363 [2024-12-10 12:37:00.430346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.363 [2024-12-10 12:37:00.430367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d8000b90 00:28:38.363 [2024-12-10 12:37:00.430418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.363 qpair failed and we were unable to recover it. 
00:28:38.363 [2024-12-10 12:37:00.439954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:38.363 [2024-12-10 12:37:00.440036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:38.363 [2024-12-10 12:37:00.440063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:38.363 [2024-12-10 12:37:00.440078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:38.363 [2024-12-10 12:37:00.440091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f88d8000b90 00:28:38.363 [2024-12-10 12:37:00.440122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:38.363 qpair failed and we were unable to recover it. 00:28:38.363 [2024-12-10 12:37:00.440249] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:28:38.363 A controller has encountered a failure and is being reset. 00:28:38.623 Controller properly reset. 00:28:38.623 Initializing NVMe Controllers 00:28:38.623 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:38.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:38.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:38.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:38.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:38.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:38.623 Initialization complete. Launching workers. 
00:28:38.623 Starting thread on core 1 00:28:38.623 Starting thread on core 2 00:28:38.623 Starting thread on core 3 00:28:38.623 Starting thread on core 0 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:38.623 00:28:38.623 real 0m10.841s 00:28:38.623 user 0m19.492s 00:28:38.623 sys 0m4.820s 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:38.623 ************************************ 00:28:38.623 END TEST nvmf_target_disconnect_tc2 00:28:38.623 ************************************ 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:38.623 rmmod nvme_tcp 00:28:38.623 rmmod nvme_fabrics 00:28:38.623 rmmod nvme_keyring 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1791151 ']' 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1791151 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1791151 ']' 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1791151 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1791151 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1791151' 00:28:38.623 killing process with pid 1791151 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1791151 00:28:38.623 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1791151 00:28:38.883 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:38.883 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:38.883 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:38.883 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:28:38.883 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:28:38.883 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:38.883 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:28:38.883 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:38.883 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:38.883 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.883 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:38.883 12:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.419 12:37:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:41.419 00:28:41.419 real 0m19.650s 00:28:41.419 user 0m47.340s 00:28:41.419 sys 0m9.786s 00:28:41.419 12:37:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:41.419 12:37:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:41.419 ************************************ 00:28:41.419 END TEST nvmf_target_disconnect 00:28:41.419 ************************************ 00:28:41.419 12:37:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:41.419 00:28:41.419 real 5m50.830s 00:28:41.419 user 10m31.742s 00:28:41.419 sys 1m58.267s 00:28:41.419 12:37:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:41.419 12:37:03 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.419 ************************************ 00:28:41.419 END TEST nvmf_host 00:28:41.419 ************************************ 00:28:41.419 12:37:03 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:28:41.419 12:37:03 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:28:41.419 12:37:03 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:41.419 12:37:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:41.419 12:37:03 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:41.419 12:37:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:41.419 ************************************ 00:28:41.419 START TEST nvmf_target_core_interrupt_mode 00:28:41.419 ************************************ 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:41.419 * Looking for test storage... 
00:28:41.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:41.419 12:37:03 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:41.419 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:41.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.420 --rc 
genhtml_branch_coverage=1 00:28:41.420 --rc genhtml_function_coverage=1 00:28:41.420 --rc genhtml_legend=1 00:28:41.420 --rc geninfo_all_blocks=1 00:28:41.420 --rc geninfo_unexecuted_blocks=1 00:28:41.420 00:28:41.420 ' 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:41.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.420 --rc genhtml_branch_coverage=1 00:28:41.420 --rc genhtml_function_coverage=1 00:28:41.420 --rc genhtml_legend=1 00:28:41.420 --rc geninfo_all_blocks=1 00:28:41.420 --rc geninfo_unexecuted_blocks=1 00:28:41.420 00:28:41.420 ' 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:41.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.420 --rc genhtml_branch_coverage=1 00:28:41.420 --rc genhtml_function_coverage=1 00:28:41.420 --rc genhtml_legend=1 00:28:41.420 --rc geninfo_all_blocks=1 00:28:41.420 --rc geninfo_unexecuted_blocks=1 00:28:41.420 00:28:41.420 ' 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:41.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.420 --rc genhtml_branch_coverage=1 00:28:41.420 --rc genhtml_function_coverage=1 00:28:41.420 --rc genhtml_legend=1 00:28:41.420 --rc geninfo_all_blocks=1 00:28:41.420 --rc geninfo_unexecuted_blocks=1 00:28:41.420 00:28:41.420 ' 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.420 12:37:03 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:41.420 
12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:41.420 ************************************ 00:28:41.420 START TEST nvmf_abort 00:28:41.420 ************************************ 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:41.420 * Looking for test storage... 
00:28:41.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:41.420 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:41.421 12:37:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:41.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.421 --rc genhtml_branch_coverage=1 00:28:41.421 --rc genhtml_function_coverage=1 00:28:41.421 --rc genhtml_legend=1 00:28:41.421 --rc geninfo_all_blocks=1 00:28:41.421 --rc geninfo_unexecuted_blocks=1 00:28:41.421 00:28:41.421 ' 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:41.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.421 --rc genhtml_branch_coverage=1 00:28:41.421 --rc genhtml_function_coverage=1 00:28:41.421 --rc genhtml_legend=1 00:28:41.421 --rc geninfo_all_blocks=1 00:28:41.421 --rc geninfo_unexecuted_blocks=1 00:28:41.421 00:28:41.421 ' 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:41.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.421 --rc genhtml_branch_coverage=1 00:28:41.421 --rc genhtml_function_coverage=1 00:28:41.421 --rc genhtml_legend=1 00:28:41.421 --rc geninfo_all_blocks=1 00:28:41.421 --rc geninfo_unexecuted_blocks=1 00:28:41.421 00:28:41.421 ' 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:41.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.421 --rc genhtml_branch_coverage=1 00:28:41.421 --rc genhtml_function_coverage=1 00:28:41.421 --rc genhtml_legend=1 00:28:41.421 --rc geninfo_all_blocks=1 00:28:41.421 --rc geninfo_unexecuted_blocks=1 00:28:41.421 00:28:41.421 ' 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:41.421 12:37:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:28:41.421 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:41.680 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:41.680 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:41.680 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:41.680 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.680 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.680 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.680 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:41.680 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.680 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:41.680 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:41.680 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:41.680 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:41.680 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:41.680 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:41.680 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:41.680 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:41.680 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:41.680 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:41.680 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:41.681 12:37:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:41.681 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:41.681 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:41.681 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:41.681 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:41.681 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:41.681 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:41.681 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:41.681 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.681 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.681 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.681 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:41.681 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:41.681 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:41.681 12:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:46.962 12:37:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:46.962 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.962 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.963 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:46.963 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:47.222 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:47.222 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:47.222 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:47.222 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:47.223 
12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:47.223 Found net devices under 0000:86:00.0: cvl_0_0 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:47.223 Found net devices under 0000:86:00.1: cvl_0_1 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:47.223 12:37:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:47.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:47.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:28:47.223 00:28:47.223 --- 10.0.0.2 ping statistics --- 00:28:47.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.223 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:47.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:47.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:28:47.223 00:28:47.223 --- 10.0.0.1 ping statistics --- 00:28:47.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.223 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:47.223 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:47.482 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:47.482 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:47.482 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:47.482 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:47.482 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1795695 00:28:47.482 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1795695 00:28:47.482 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:47.482 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1795695 ']' 00:28:47.482 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.482 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:47.483 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:47.483 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:47.483 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:47.483 [2024-12-10 12:37:09.474563] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:47.483 [2024-12-10 12:37:09.475537] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:28:47.483 [2024-12-10 12:37:09.475577] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:47.483 [2024-12-10 12:37:09.555128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:47.483 [2024-12-10 12:37:09.597009] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:47.483 [2024-12-10 12:37:09.597045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:47.483 [2024-12-10 12:37:09.597054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:47.483 [2024-12-10 12:37:09.597060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:47.483 [2024-12-10 12:37:09.597065] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:47.483 [2024-12-10 12:37:09.598502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:47.483 [2024-12-10 12:37:09.598612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.483 [2024-12-10 12:37:09.598612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:47.741 [2024-12-10 12:37:09.668156] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:47.741 [2024-12-10 12:37:09.669022] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:47.741 [2024-12-10 12:37:09.669204] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:47.741 [2024-12-10 12:37:09.669358] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:47.741 [2024-12-10 12:37:09.739472] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:28:47.741 Malloc0 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:47.741 Delay0 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:47.741 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.742 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:28:47.742 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.742 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:47.742 [2024-12-10 12:37:09.827351] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:47.742 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.742 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:47.742 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.742 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:47.742 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.742 12:37:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:48.000 [2024-12-10 12:37:10.000309] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:49.973 Initializing NVMe Controllers 00:28:49.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:49.973 controller IO queue size 128 less than required 00:28:49.973 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:49.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:49.973 Initialization complete. Launching workers. 
00:28:49.973 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 36811 00:28:49.973 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36872, failed to submit 66 00:28:49.973 success 36811, unsuccessful 61, failed 0 00:28:49.973 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:49.973 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.973 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:49.973 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.973 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:49.973 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:49.973 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:49.973 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:49.973 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:49.973 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:49.973 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:49.973 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:49.973 rmmod nvme_tcp 00:28:49.973 rmmod nvme_fabrics 00:28:50.272 rmmod nvme_keyring 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:50.272 12:37:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1795695 ']' 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1795695 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1795695 ']' 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1795695 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1795695 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1795695' 00:28:50.272 killing process with pid 1795695 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1795695 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1795695 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:50.272 12:37:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:50.272 12:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.809 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:52.809 00:28:52.809 real 0m11.079s 00:28:52.809 user 0m10.413s 00:28:52.809 sys 0m5.696s 00:28:52.809 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:52.809 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:52.809 ************************************ 00:28:52.809 END TEST nvmf_abort 00:28:52.809 ************************************ 00:28:52.809 12:37:14 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:52.809 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:52.809 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:52.809 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:52.809 ************************************ 00:28:52.809 START TEST nvmf_ns_hotplug_stress 00:28:52.809 ************************************ 00:28:52.809 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:52.809 * Looking for test storage... 
00:28:52.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:28:52.809 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:52.809 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:28:52.809 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:52.809 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:52.809 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:52.809 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:52.809 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:52.809 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:52.809 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:52.809 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:52.809 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:52.809 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:52.809 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:52.809 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:52.809 12:37:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:52.809 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:52.810 12:37:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:52.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.810 --rc genhtml_branch_coverage=1 00:28:52.810 --rc genhtml_function_coverage=1 00:28:52.810 --rc genhtml_legend=1 00:28:52.810 --rc geninfo_all_blocks=1 00:28:52.810 --rc geninfo_unexecuted_blocks=1 00:28:52.810 00:28:52.810 ' 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:52.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.810 --rc genhtml_branch_coverage=1 00:28:52.810 --rc genhtml_function_coverage=1 00:28:52.810 --rc genhtml_legend=1 00:28:52.810 --rc geninfo_all_blocks=1 00:28:52.810 --rc geninfo_unexecuted_blocks=1 00:28:52.810 00:28:52.810 ' 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:52.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.810 --rc genhtml_branch_coverage=1 00:28:52.810 --rc genhtml_function_coverage=1 00:28:52.810 --rc genhtml_legend=1 00:28:52.810 --rc geninfo_all_blocks=1 00:28:52.810 --rc geninfo_unexecuted_blocks=1 00:28:52.810 00:28:52.810 ' 00:28:52.810 12:37:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:52.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.810 --rc genhtml_branch_coverage=1 00:28:52.810 --rc genhtml_function_coverage=1 00:28:52.810 --rc genhtml_legend=1 00:28:52.810 --rc geninfo_all_blocks=1 00:28:52.810 --rc geninfo_unexecuted_blocks=1 00:28:52.810 00:28:52.810 ' 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:52.810 12:37:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.810 
12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.810 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:52.811 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:52.811 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:52.811 12:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:59.382 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.382 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:28:59.382 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:59.382 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:59.382 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:59.382 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:59.382 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:59.382 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:28:59.382 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:59.382 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:28:59.383 
12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.383 12:37:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:59.383 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.383 12:37:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:59.383 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.383 
12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:59.383 Found net devices under 0000:86:00.0: cvl_0_0 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:59.383 Found net devices under 0000:86:00.1: cvl_0_1 00:28:59.383 
12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:59.383 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:59.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:28:59.384 00:28:59.384 --- 10.0.0.2 ping statistics --- 00:28:59.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.384 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:59.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:59.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:28:59.384 00:28:59.384 --- 10.0.0.1 ping statistics --- 00:28:59.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.384 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:59.384 12:37:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1799688 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1799688 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1799688 ']' 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:59.384 [2024-12-10 12:37:20.696010] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:59.384 [2024-12-10 12:37:20.697041] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:28:59.384 [2024-12-10 12:37:20.697078] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.384 [2024-12-10 12:37:20.778192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:59.384 [2024-12-10 12:37:20.818774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.384 [2024-12-10 12:37:20.818813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.384 [2024-12-10 12:37:20.818820] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.384 [2024-12-10 12:37:20.818827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.384 [2024-12-10 12:37:20.818834] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:59.384 [2024-12-10 12:37:20.820152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.384 [2024-12-10 12:37:20.820260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.384 [2024-12-10 12:37:20.820261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:59.384 [2024-12-10 12:37:20.889639] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:59.384 [2024-12-10 12:37:20.890498] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:59.384 [2024-12-10 12:37:20.890731] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:59.384 [2024-12-10 12:37:20.890863] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:59.384 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:59.385 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:59.385 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.385 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:28:59.385 12:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:59.385 [2024-12-10 12:37:21.141068] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.385 12:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:59.385 12:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:59.385 [2024-12-10 12:37:21.541587] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:59.644 12:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:59.644 12:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:59.903 Malloc0 00:28:59.903 12:37:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:00.162 Delay0 00:29:00.162 12:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:00.420 12:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:00.420 NULL1 00:29:00.420 12:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:29:00.677 12:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1800045 00:29:00.678 12:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:00.678 12:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:00.678 12:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:00.935 12:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:01.193 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:01.193 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:01.193 true 00:29:01.193 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:01.193 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:01.451 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:01.709 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:01.709 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:01.967 true 00:29:01.967 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:01.967 12:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:01.967 12:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:02.225 12:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:02.225 12:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:02.483 true 00:29:02.483 12:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:02.483 12:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:02.741 12:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:02.999 12:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:02.999 12:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:02.999 true 00:29:02.999 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:02.999 12:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:04.372 Read completed with error (sct=0, sc=11) 00:29:04.372 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.372 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:04.372 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:29:04.372 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.372 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.372 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.372 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:04.372 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:04.372 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:04.629 true 00:29:04.629 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:04.629 12:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:05.563 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:05.563 12:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:05.563 12:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:05.563 12:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:05.821 true 00:29:05.821 12:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:05.821 12:37:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:06.078 12:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:06.337 12:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:29:06.337 12:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:06.337 true 00:29:06.337 12:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:06.337 12:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:07.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:07.710 12:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:07.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:07.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:07.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:07.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:07.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:29:07.710 12:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:07.710 12:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:07.968 true 00:29:07.968 12:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:07.968 12:37:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:08.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.902 12:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:08.902 12:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:29:08.902 12:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:09.159 true 00:29:09.159 12:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:09.159 12:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:09.417 12:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:09.417 12:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:09.417 12:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:09.675 true 00:29:09.675 12:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:09.675 12:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:11.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:11.047 12:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:11.047 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:11.047 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:11.047 true 00:29:11.047 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:11.047 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:11.305 
12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:11.562 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:11.563 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:11.819 true 00:29:11.819 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:11.819 12:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:12.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:12.750 12:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:12.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:13.014 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:13.014 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:13.014 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:13.014 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:13.014 12:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:13.014 12:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:13.273 true 00:29:13.273 12:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:13.273 12:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:14.206 12:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:14.206 12:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:14.206 12:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:14.464 true 00:29:14.464 12:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:14.464 12:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:14.722 12:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:14.980 12:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:14.980 12:37:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:14.980 true 00:29:14.980 12:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:14.980 12:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:16.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:16.352 12:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:16.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:16.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:16.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:16.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:16.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:16.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:16.352 12:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:16.352 12:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:16.610 true 00:29:16.610 12:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:16.610 12:37:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:17.543 12:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:17.543 12:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:17.543 12:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:17.800 true 00:29:17.800 12:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:17.800 12:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:18.058 12:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:18.058 12:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:18.058 12:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:18.316 true 00:29:18.316 12:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
1800045 00:29:18.316 12:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:19.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:19.688 12:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:19.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:19.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:19.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:19.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:19.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:19.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:19.689 12:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:19.689 12:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:19.946 true 00:29:19.946 12:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:19.946 12:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:20.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:20.879 12:37:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:20.879 12:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:20.879 12:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:21.136 true 00:29:21.136 12:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:21.137 12:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.394 12:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:21.394 12:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:21.394 12:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:21.652 true 00:29:21.652 12:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:21.652 12:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:29:23.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.024 12:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:23.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.024 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:23.025 12:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:23.025 12:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:23.281 true 00:29:23.281 12:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:23.281 12:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:24.215 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:24.215 12:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:24.215 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:24.215 12:37:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:24.215 12:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:24.473 true 00:29:24.473 12:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:24.473 12:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:24.730 12:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:24.730 12:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:24.730 12:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:29:24.988 true 00:29:24.988 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:24.988 12:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:26.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:26.361 12:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:26.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:26.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:26.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:26.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:26.361 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:26.361 12:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:29:26.361 12:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:29:26.619 true 00:29:26.619 12:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:26.619 12:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:27.552 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:27.552 12:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:27.552 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:27.552 12:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:27.552 12:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:29:27.810 true 00:29:27.810 12:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:27.810 12:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:28.067 12:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:28.067 12:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:29:28.067 12:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:29:28.325 true 00:29:28.325 12:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:28.325 12:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:29.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:29.698 12:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:29.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:29.698 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:29:29.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:29.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:29.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:29.698 12:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:29:29.698 12:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:29:29.956 true 00:29:29.956 12:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:29.956 12:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:30.890 12:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:30.890 12:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:29:30.890 12:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:29:31.148 true 00:29:31.148 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:31.148 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:31.148 Initializing NVMe Controllers
00:29:31.148 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:31.148 Controller IO queue size 128, less than required.
00:29:31.148 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:31.148 Controller IO queue size 128, less than required.
00:29:31.148 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:31.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:31.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:29:31.148 Initialization complete. Launching workers.
00:29:31.148 ========================================================
00:29:31.148                                                                            Latency(us)
00:29:31.148 Device Information                                         :     IOPS    MiB/s    Average       min        max
00:29:31.148 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  2036.84     0.99   41177.42   2822.45 1013554.61
00:29:31.148 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17055.02     8.33    7481.88   1595.31  306812.66
00:29:31.148 ========================================================
00:29:31.148 Total                                                      : 19091.86     9.32   11076.74   1595.31 1013554.61
00:29:31.148
00:29:31.406 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:31.406 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:29:31.406 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:29:31.696 true
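The run above is one pass of the hotplug stress loop in ns_hotplug_stress.sh: while spdk_nvme_perf (PID 1800045) is alive (`kill -0`), the script removes the namespace under active I/O, re-adds Delay0, bumps `null_size`, and resizes NULL1, so the log shows `bdev_null_resize NULL1 1001` through `... 1030` over the 30-second perf run. A minimal Python sketch of that control flow; the `rpc()` stub standing in for `scripts/rpc.py` and the fixed 30-pass lifetime are assumptions for illustration, not the real script:

```python
# Hypothetical reconstruction of the loop visible in the log above
# (the real logic lives in spdk/test/nvmf/target/ns_hotplug_stress.sh).

calls = []

def rpc(*args):
    # Stand-in for `scripts/rpc.py`; records the call instead of issuing it.
    calls.append(args)

def perf_alive(iteration, lifetime=30):
    # Stand-in for `kill -0 $PERF_PID`: True while spdk_nvme_perf still runs
    # (assumed one loop pass per second of the 30-second -t 30 run).
    return iteration < lifetime

null_size = 1000
iteration = 0
while perf_alive(iteration):
    # ns_hotplug_stress.sh@45: detach the namespace under active I/O
    rpc("nvmf_subsystem_remove_ns", "nqn.2016-06.io.spdk:cnode1", "1")
    # ns_hotplug_stress.sh@46: re-attach it
    rpc("nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "Delay0")
    # ns_hotplug_stress.sh@49-50: grow NULL1 by one block each pass
    null_size += 1
    rpc("bdev_null_resize", "NULL1", str(null_size))
    iteration += 1

print(null_size)  # 1030, matching the last resize in the log
```

The loop matches the log's cadence: each pass emits one remove, one add, and one resize, with the resize argument counting up from 1001.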
00:29:31.696 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1800045 00:29:31.696 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1800045) - No such process 00:29:31.696 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1800045 00:29:31.696 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:31.954 12:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:32.212 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:29:32.212 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:29:32.212 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:29:32.212 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:32.212 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:29:32.212 null0 00:29:32.212 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:32.212 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:29:32.212 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:29:32.470 null1 00:29:32.470 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:32.470 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:32.470 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:29:32.729 null2 00:29:32.729 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:32.729 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:32.729 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:29:32.989 null3 00:29:32.989 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:32.989 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:32.989 12:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:29:32.989 null4 00:29:32.989 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:32.989 12:37:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:32.989 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:29:33.255 null5 00:29:33.255 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:33.255 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:33.255 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:29:33.517 null6 00:29:33.517 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:33.517 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:33.517 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:29:33.775 null7 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:33.775 12:37:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:33.775 12:37:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:33.775 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1805412 1805414 1805417 1805420 1805423 1805426 1805429 1805432 00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:33.776 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:34.035 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:34.035 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:34.035 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:34.035 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:34.035 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:34.035 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:34.035 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:34.035 12:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:34.035 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.035 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.035 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:34.035 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.035 12:37:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.035 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:34.035 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.035 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.035 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:34.035 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.035 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.035 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:34.035 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.035 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.035 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:34.035 12:37:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.035 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.035 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:34.035 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.035 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.035 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:34.035 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.035 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.035 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:34.293 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:34.293 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:34.294 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:34.294 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:34.294 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:34.294 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:34.294 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:34.294 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:34.552 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:34.811 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:34.811 
12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:34.811 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:34.811 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:34.811 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:34.811 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:34.811 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:34.811 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:35.070 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.070 12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.070 
12:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.070 12:37:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:35.070 12:37:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:35.070 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:35.329 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:35.329 12:37:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.329 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.329 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:35.329 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.329 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.329 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:35.329 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.329 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.329 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:35.329 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.329 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.329 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.329 12:37:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.329 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:35.329 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:35.329 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.329 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.329 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:35.329 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.329 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.329 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:35.329 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.329 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.329 12:37:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:35.587 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:35.587 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:35.587 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:35.587 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:35.587 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:35.587 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:35.587 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:35.587 12:37:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:35.846 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.846 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.846 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:35.846 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.846 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.846 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:35.846 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.846 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.846 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.846 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.846 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:35.846 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:35.846 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.846 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.846 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:35.846 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.846 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.846 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:35.846 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.846 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.846 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:35.846 12:37:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:35.846 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:35.846 12:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:36.105 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:36.105 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:36.105 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:36.105 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:36.105 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:36.105 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:36.105 
12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:36.105 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:36.105 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.105 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.105 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:36.105 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.106 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.106 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:36.106 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.106 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.106 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:36.106 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.106 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.106 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:36.106 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.106 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.106 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:36.106 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.106 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.106 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:36.106 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.106 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.106 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:36.106 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.106 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.106 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:36.364 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:36.364 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:36.364 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:36.364 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:36.364 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:36.364 12:37:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:36.364 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:36.364 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:36.623 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:36.881 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:36.881 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:36.881 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:36.881 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:36.881 12:37:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:36.881 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:36.881 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:36.881 12:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:37.140 12:37:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:37.140 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.399 12:37:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.399 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:37.657 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:37.657 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:37.658 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:37.658 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:37.658 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:37.658 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:37.658 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:37.658 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.916 12:37:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:37.916 rmmod nvme_tcp 00:29:37.916 rmmod nvme_fabrics 00:29:37.916 rmmod nvme_keyring 00:29:37.916 12:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:37.916 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:29:37.916 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:29:37.916 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1799688 ']' 00:29:37.916 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1799688 00:29:37.916 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1799688 ']' 00:29:37.916 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1799688 00:29:37.916 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:29:37.917 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:37.917 12:38:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1799688 00:29:37.917 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:37.917 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:37.917 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1799688' 00:29:37.917 killing process with pid 1799688 00:29:37.917 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1799688 00:29:37.917 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1799688 00:29:38.176 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:38.176 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:38.176 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:38.176 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:29:38.176 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:29:38.176 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:38.176 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:29:38.176 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:38.176 
12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:38.176 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.176 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.176 12:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.200 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:40.200 00:29:40.200 real 0m47.776s 00:29:40.200 user 2m59.171s 00:29:40.200 sys 0m19.738s 00:29:40.200 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:40.200 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:40.200 ************************************ 00:29:40.200 END TEST nvmf_ns_hotplug_stress 00:29:40.200 ************************************ 00:29:40.200 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:40.200 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:40.200 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:40.200 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:40.461 ************************************ 00:29:40.461 START TEST nvmf_delete_subsystem 00:29:40.461 ************************************ 00:29:40.461 12:38:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:40.461 * Looking for test storage... 00:29:40.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:40.461 12:38:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:40.461 12:38:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:40.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.461 --rc genhtml_branch_coverage=1 00:29:40.461 --rc genhtml_function_coverage=1 00:29:40.461 --rc genhtml_legend=1 00:29:40.461 --rc geninfo_all_blocks=1 00:29:40.461 --rc geninfo_unexecuted_blocks=1 00:29:40.461 00:29:40.461 ' 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:40.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.461 --rc genhtml_branch_coverage=1 00:29:40.461 --rc genhtml_function_coverage=1 00:29:40.461 --rc genhtml_legend=1 00:29:40.461 --rc geninfo_all_blocks=1 00:29:40.461 --rc geninfo_unexecuted_blocks=1 00:29:40.461 00:29:40.461 ' 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:40.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.461 --rc genhtml_branch_coverage=1 00:29:40.461 --rc 
genhtml_function_coverage=1 00:29:40.461 --rc genhtml_legend=1 00:29:40.461 --rc geninfo_all_blocks=1 00:29:40.461 --rc geninfo_unexecuted_blocks=1 00:29:40.461 00:29:40.461 ' 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:40.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.461 --rc genhtml_branch_coverage=1 00:29:40.461 --rc genhtml_function_coverage=1 00:29:40.461 --rc genhtml_legend=1 00:29:40.461 --rc geninfo_all_blocks=1 00:29:40.461 --rc geninfo_unexecuted_blocks=1 00:29:40.461 00:29:40.461 ' 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.461 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:40.462 12:38:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:29:40.462 12:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- 
# e810=() 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:47.034 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:47.034 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:47.034 Found net devices under 0000:86:00.0: cvl_0_0 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.034 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.035 12:38:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:47.035 Found net devices under 0000:86:00.1: cvl_0_1 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.035 12:38:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:47.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:47.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:29:47.035 00:29:47.035 --- 10.0.0.2 ping statistics --- 00:29:47.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.035 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:47.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:29:47.035 00:29:47.035 --- 10.0.0.1 ping statistics --- 00:29:47.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.035 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1809687 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1809687 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1809687 ']' 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:47.035 [2024-12-10 12:38:08.504432] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:47.035 [2024-12-10 12:38:08.505403] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:29:47.035 [2024-12-10 12:38:08.505443] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.035 [2024-12-10 12:38:08.589053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:47.035 [2024-12-10 12:38:08.629566] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.035 [2024-12-10 12:38:08.629603] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.035 [2024-12-10 12:38:08.629610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:47.035 [2024-12-10 12:38:08.629616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:47.035 [2024-12-10 12:38:08.629621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:47.035 [2024-12-10 12:38:08.630761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.035 [2024-12-10 12:38:08.630762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.035 [2024-12-10 12:38:08.699820] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:47.035 [2024-12-10 12:38:08.700345] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:47.035 [2024-12-10 12:38:08.700554] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
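The trace above (nvmf/common.sh@250-291) shows `nvmf_tcp_init` moving one port of a two-port NIC into a private network namespace so the NVMe-oF target and the initiator can talk TCP on a single host. A dry-run sketch of that sequence, using the interface and namespace names from this log; `run` is an illustrative wrapper that only prints each command, since the real ones need root and physical NICs:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup traced in the log above.
# "run" prints instead of executing, so this is safe without root;
# drop the wrapper to perform the setup for real.
run() { printf '+ %s\n' "$*"; }

TARGET_IF=cvl_0_0        # NIC port that will host the NVMe-oF target
INITIATOR_IF=cvl_0_1     # NIC port left in the default namespace
NS=cvl_0_0_ns_spdk       # private namespace for the target side

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"            # move target port into the namespace
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                              # target reachable from default ns
run ip netns exec "$NS" ping -c 1 10.0.0.1          # initiator reachable from inside ns
```

The two pings at the end mirror the checks in the log; both must succeed before `nvmf_tgt` is launched inside the namespace with `ip netns exec`.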
00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:47.035 [2024-12-10 12:38:08.767607] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.035 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:47.036 [2024-12-10 12:38:08.799910] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:47.036 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.036 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:47.036 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.036 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:47.036 NULL1 00:29:47.036 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.036 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:47.036 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.036 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:29:47.036 Delay0 00:29:47.036 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.036 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:47.036 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.036 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:47.036 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.036 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1809868 00:29:47.036 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:47.036 12:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:47.036 [2024-12-10 12:38:08.915150] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
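The RPC calls traced above (delete_subsystem.sh@15-28) build the test fixture: a TCP transport, a subsystem, a listener, and a null bdev wrapped in a delay bdev so that I/O is still queued when the subsystem is torn down. A dry-run sketch of that sequence; `rpc_cmd` normally drives `scripts/rpc.py` against the running target, but here it just echoes so the sketch runs without an SPDK target:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC sequence from delete_subsystem.sh above.
# rpc_cmd is stubbed to echo; in the real test it invokes rpc.py.
rpc_cmd() { printf 'rpc: %s\n' "$*"; }

NQN=nqn.2016-06.io.spdk:cnode1

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512     # 1000 MiB backing bdev, 512 B blocks
# Delay bdev: ~1 s average latency per op (arguments are microseconds),
# guaranteeing outstanding I/O at deletion time:
rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc_cmd nvmf_subsystem_add_ns "$NQN" Delay0
# spdk_nvme_perf then runs in the background against 10.0.0.2:4420
# (-t 5 -q 128 -w randrw), and after "sleep 2" the test fires:
rpc_cmd nvmf_delete_subsystem "$NQN"
```

Deleting the subsystem while perf has 128 queued commands per queue is the whole point of the test: every in-flight command must complete with an abort status rather than hang, which is exactly the burst of error completions that follows in the log.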
00:29:48.938 12:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:48.938 12:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.938 12:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 starting I/O failed: -6 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 starting I/O failed: -6 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 starting I/O failed: -6 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 starting I/O failed: -6 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 starting I/O failed: -6 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 starting I/O failed: -6 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 
00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 starting I/O failed: -6 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 starting I/O failed: -6 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 starting I/O failed: -6 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 starting I/O failed: -6 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 starting I/O failed: -6 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 [2024-12-10 12:38:11.169779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168e4a0 is same with the state(6) to be set 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error 
(sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 Write completed with error (sct=0, sc=8) 00:29:49.197 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 
Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 [2024-12-10 12:38:11.170505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168e680 is same with the state(6) to be set 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 starting I/O failed: -6 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 starting I/O failed: -6 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 starting I/O failed: -6 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 starting I/O failed: -6 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 starting I/O failed: -6 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 starting I/O failed: -6 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 starting 
I/O failed: -6 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 starting I/O failed: -6 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 starting I/O failed: -6 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 starting I/O failed: -6 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 starting I/O failed: -6 00:29:49.198 [2024-12-10 12:38:11.170844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6d8400d390 is same with the state(6) to be set 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 
00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Read completed with error (sct=0, sc=8) 00:29:49.198 Write completed with error (sct=0, sc=8) 00:29:49.198 Read 
completed with error (sct=0, sc=8) 00:29:50.135 [2024-12-10 12:38:12.134146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168f9b0 is same with the state(6) to be set 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 [2024-12-10 12:38:12.173166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6d8400d060 is same with the state(6) to be set 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read 
completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 [2024-12-10 12:38:12.173878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6d8400d6c0 is same with the state(6) to be set 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 [2024-12-10 12:38:12.173989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x168e860 is same with the state(6) to be set 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Write completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 Read completed with error (sct=0, sc=8) 00:29:50.135 [2024-12-10 12:38:12.174696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168e2c0 is same with the state(6) to be set 00:29:50.135 Initializing NVMe Controllers 00:29:50.135 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:50.135 Controller IO queue size 128, less than required. 00:29:50.135 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:50.135 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:50.135 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:50.135 Initialization complete. Launching workers. 
00:29:50.135 ========================================================
00:29:50.135 Latency(us)
00:29:50.135 Device Information : IOPS MiB/s Average min max
00:29:50.135 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.80 0.08 906009.72 728.65 1012496.55
00:29:50.135 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.82 0.08 912074.02 251.70 1043509.77
00:29:50.135 ========================================================
00:29:50.135 Total : 327.62 0.16 909023.49 251.70 1043509.77
00:29:50.135
00:29:50.135 [2024-12-10 12:38:12.175537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168f9b0 (9): Bad file descriptor
00:29:50.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:50.135 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.135 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:29:50.135 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1809868
00:29:50.135 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1809868
00:29:50.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1809868) - No such process
00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1809868
00:29:50.704 12:38:12
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1809868 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1809868 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
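The xtrace above runs autotest_common.sh's NOT helper, which executes a command (`wait 1809868` on an already-reaped PID) and treats the command's failure as the test's success. A minimal hedged sketch of that inversion idiom — the helper body and the signal-offset handling here are simplified illustrations, not the exact SPDK implementation:

```shell
#!/usr/bin/env bash
# Hedged sketch of the "NOT" idiom traced above: succeed only when the
# wrapped command fails. Illustrative only; the real helper also validates
# its argument via "type -t" before executing it.

NOT() {
    local es=0
    "$@" || es=$?
    if (( es > 128 )); then     # illustrative: fold signal-death statuses back down
        es=$(( es - 128 ))
    fi
    [ "$es" -ne 0 ]             # invert: nonzero child status means success here
}

NOT false && echo "false failed, as expected"
```

The trace's `es=1` / `(( es > 128 ))` lines correspond to this capture-then-invert flow: the expected failure of `wait` on a vanished process lets the test proceed to re-create the subsystem.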
00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:50.704 [2024-12-10 12:38:12.707886] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1810397 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1810397 00:29:50.704 12:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:50.704 [2024-12-10 12:38:12.790876] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:51.271 12:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:51.271 12:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1810397 00:29:51.271 12:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:51.838 12:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:51.838 12:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1810397 00:29:51.838 12:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:52.096 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:52.096 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1810397 00:29:52.096 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:52.663 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- 
# (( delay++ > 20 )) 00:29:52.663 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1810397 00:29:52.663 12:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:53.229 12:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:53.229 12:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1810397 00:29:53.229 12:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:53.795 12:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:53.795 12:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1810397 00:29:53.795 12:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:53.795 Initializing NVMe Controllers 00:29:53.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:53.795 Controller IO queue size 128, less than required. 00:29:53.795 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:53.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:53.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:53.795 Initialization complete. Launching workers. 
00:29:53.795 ========================================================
00:29:53.795 Latency(us)
00:29:53.795 Device Information : IOPS MiB/s Average min max
00:29:53.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002116.80 1000158.67 1005816.24
00:29:53.795 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004999.34 1000248.15 1043444.21
00:29:53.795 ========================================================
00:29:53.795 Total : 256.00 0.12 1003558.07 1000158.67 1043444.21
00:29:53.795
00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1810397
00:29:54.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1810397) - No such process
00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1810397
00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:54.363 rmmod nvme_tcp 00:29:54.363 rmmod nvme_fabrics 00:29:54.363 rmmod nvme_keyring 00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1809687 ']' 00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1809687 00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1809687 ']' 00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1809687 00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1809687 00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:54.363 12:38:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1809687' 00:29:54.363 killing process with pid 1809687 00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1809687 00:29:54.363 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1809687 00:29:54.622 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:54.622 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:54.622 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:54.622 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:54.622 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:29:54.622 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:54.622 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:54.622 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:54.622 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:54.622 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.622 12:38:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:54.622 12:38:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.528 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:56.528 00:29:56.528 real 0m16.214s 00:29:56.528 user 0m26.730s 00:29:56.528 sys 0m5.857s 00:29:56.528 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:56.528 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:56.528 ************************************ 00:29:56.528 END TEST nvmf_delete_subsystem 00:29:56.528 ************************************ 00:29:56.528 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:56.528 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:56.528 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:56.528 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:56.528 ************************************ 00:29:56.528 START TEST nvmf_host_management 00:29:56.528 ************************************ 00:29:56.528 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:56.787 * Looking for test storage... 
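The host_management.sh prologue that follows walks scripts/common.sh's `cmp_versions` through `lt 1.15 2` to decide which lcov coverage flags the installed lcov supports. A hedged, simplified sketch of that dotted-version comparison — numeric dot-separated fields only, missing fields treated as zero; this is not the real helper's full grammar, which also handles `.-:` separators:

```shell
#!/usr/bin/env bash
# Hedged sketch of a component-wise dotted-version "less than" test,
# in the spirit of the cmp_versions trace below. Illustrative only.

version_lt() {                  # returns 0 when $1 < $2
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        if (( a < b )); then return 0; fi   # earliest differing field decides
        if (( a > b )); then return 1; fi
    done
    return 1                                # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Note the comparison is per numeric field, not lexicographic: `1.15 < 2` holds because the first fields differ (1 < 2), even though the string "1.15" sorts after "2" would be wrong either way here.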
00:29:56.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:29:56.787 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:56.787 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:29:56.787 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:56.787 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:56.787 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:56.787 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:56.787 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:56.787 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:56.787 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:56.787 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:56.787 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:56.787 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:56.787 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:56.787 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:56.787 12:38:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:56.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.788 --rc genhtml_branch_coverage=1 00:29:56.788 --rc genhtml_function_coverage=1 00:29:56.788 --rc genhtml_legend=1 00:29:56.788 --rc geninfo_all_blocks=1 00:29:56.788 --rc geninfo_unexecuted_blocks=1 00:29:56.788 00:29:56.788 ' 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:56.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.788 --rc genhtml_branch_coverage=1 00:29:56.788 --rc genhtml_function_coverage=1 00:29:56.788 --rc genhtml_legend=1 00:29:56.788 --rc geninfo_all_blocks=1 00:29:56.788 --rc geninfo_unexecuted_blocks=1 00:29:56.788 00:29:56.788 ' 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:56.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.788 --rc genhtml_branch_coverage=1 00:29:56.788 --rc genhtml_function_coverage=1 00:29:56.788 --rc genhtml_legend=1 00:29:56.788 --rc geninfo_all_blocks=1 00:29:56.788 --rc geninfo_unexecuted_blocks=1 00:29:56.788 00:29:56.788 ' 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:56.788 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:56.788 --rc genhtml_branch_coverage=1 00:29:56.788 --rc genhtml_function_coverage=1 00:29:56.788 --rc genhtml_legend=1 00:29:56.788 --rc geninfo_all_blocks=1 00:29:56.788 --rc geninfo_unexecuted_blocks=1 00:29:56.788 00:29:56.788 ' 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:56.788 12:38:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.788 
12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.788 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.789 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.789 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:56.789 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:56.789 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:56.789 12:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:03.359 
12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:03.359 12:38:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:03.359 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:03.359 12:38:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:03.359 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.359 12:38:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:03.359 Found net devices under 0000:86:00.0: cvl_0_0 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.359 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:03.359 Found net devices under 0000:86:00.1: cvl_0_1 00:30:03.360 12:38:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:03.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:03.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:30:03.360 00:30:03.360 --- 10.0.0.2 ping statistics --- 00:30:03.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.360 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:03.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:03.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:30:03.360 00:30:03.360 --- 10.0.0.1 ping statistics --- 00:30:03.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.360 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1814569 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1814569 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1814569 ']' 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:03.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:03.360 12:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:03.360 [2024-12-10 12:38:24.850343] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:03.360 [2024-12-10 12:38:24.851264] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:30:03.360 [2024-12-10 12:38:24.851297] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:03.360 [2024-12-10 12:38:24.932584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:03.360 [2024-12-10 12:38:24.974326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:03.360 [2024-12-10 12:38:24.974362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:03.360 [2024-12-10 12:38:24.974369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:03.360 [2024-12-10 12:38:24.974376] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:03.360 [2024-12-10 12:38:24.974381] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:03.360 [2024-12-10 12:38:24.975840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:03.360 [2024-12-10 12:38:24.975949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:03.360 [2024-12-10 12:38:24.976056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.360 [2024-12-10 12:38:24.976057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:03.360 [2024-12-10 12:38:25.044288] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:03.360 [2024-12-10 12:38:25.045167] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:03.360 [2024-12-10 12:38:25.045372] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:03.360 [2024-12-10 12:38:25.045837] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:03.360 [2024-12-10 12:38:25.045890] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:03.620 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:03.620 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:30:03.620 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:03.620 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:03.620 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:30:03.620 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:03.620 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:30:03.620 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:03.620 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:30:03.620 [2024-12-10 12:38:25.732738] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:03.620 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:03.620 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:30:03.620 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:03.620 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:30:03.620 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt
00:30:03.620 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:30:03.620 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:30:03.620 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:03.620 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:30:03.879 Malloc0
00:30:03.879 [2024-12-10 12:38:25.829038] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:03.879 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:03.879 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:30:03.879 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:03.879 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:30:03.879 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1814660
00:30:03.879 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1814660 /var/tmp/bdevperf.sock
00:30:03.879 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1814660 ']'
00:30:03.879 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:30:03.879 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:30:03.879 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:30:03.879 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:03.879 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:30:03.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:30:03.879 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:30:03.879 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:03.879 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:30:03.879 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:30:03.879 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:30:03.879 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:30:03.879 {
00:30:03.879 "params": {
00:30:03.879 "name": "Nvme$subsystem",
00:30:03.879 "trtype": "$TEST_TRANSPORT",
00:30:03.879 "traddr": "$NVMF_FIRST_TARGET_IP",
00:30:03.879 "adrfam": "ipv4",
00:30:03.879 "trsvcid": "$NVMF_PORT",
00:30:03.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:30:03.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:30:03.879 "hdgst": ${hdgst:-false},
00:30:03.879 "ddgst": ${ddgst:-false}
00:30:03.879 },
00:30:03.879 "method": "bdev_nvme_attach_controller"
00:30:03.879 }
00:30:03.879 EOF
00:30:03.879 )")
00:30:03.879 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:30:03.879 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:30:03.879 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:30:03.879 12:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:30:03.879 "params": {
00:30:03.879 "name": "Nvme0",
00:30:03.879 "trtype": "tcp",
00:30:03.879 "traddr": "10.0.0.2",
00:30:03.879 "adrfam": "ipv4",
00:30:03.879 "trsvcid": "4420",
00:30:03.879 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:30:03.879 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:30:03.879 "hdgst": false,
00:30:03.879 "ddgst": false
00:30:03.879 },
00:30:03.879 "method": "bdev_nvme_attach_controller"
00:30:03.879 }'
00:30:03.879 [2024-12-10 12:38:25.930167] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization...
00:30:03.879 [2024-12-10 12:38:25.930222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1814660 ]
00:30:03.879 [2024-12-10 12:38:26.007935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:04.138 [2024-12-10 12:38:26.048816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:04.138 Running I/O for 10 seconds...
00:30:04.138 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:04.138 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:30:04.138 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:30:04.138 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:04.138 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:30:04.138 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:04.138 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:30:04.138 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:30:04.138 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:30:04.138 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:30:04.138 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:30:04.138 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:30:04.138 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:30:04.138 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:30:04.397 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:30:04.397 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:30:04.397 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:04.397 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:30:04.397 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:04.397 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=92
00:30:04.397 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 92 -ge 100 ']'
00:30:04.397 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25
00:30:04.657 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- ))
00:30:04.657 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:30:04.657 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:30:04.657 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:30:04.657 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:04.657 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:04.657 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.657 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:30:04.657 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:30:04.657 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:04.657 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:04.657 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:04.657 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:04.657 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.657 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:04.657 [2024-12-10 12:38:26.640473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5c1b0 is same with the state(6) to be set 00:30:04.658 [2024-12-10 12:38:26.640509] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5c1b0 is same with the state(6) to be set 00:30:04.658 [2024-12-10 12:38:26.640517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5c1b0 is same with the state(6) to be set 00:30:04.658 [2024-12-10 12:38:26.640524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1e5c1b0 is same with the state(6) to be set 00:30:04.658 [2024-12-10 12:38:26.640530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5c1b0 is same with the state(6) to be set 00:30:04.658 [2024-12-10 12:38:26.640537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5c1b0 is same with the state(6) to be set 00:30:04.658 [2024-12-10 12:38:26.640542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5c1b0 is same with the state(6) to be set 00:30:04.658 [2024-12-10 12:38:26.640548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5c1b0 is same with the state(6) to be set 00:30:04.658 [2024-12-10 12:38:26.640555] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5c1b0 is same with the state(6) to be set 00:30:04.658 [2024-12-10 12:38:26.640560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5c1b0 is same with the state(6) to be set 00:30:04.658 [2024-12-10 12:38:26.640566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5c1b0 is same with the state(6) to be set 00:30:04.658 [2024-12-10 12:38:26.640572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e5c1b0 is same with the state(6) to be set 00:30:04.658 [2024-12-10 12:38:26.640970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.658 [2024-12-10 12:38:26.641003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.641013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.658 [2024-12-10 12:38:26.641020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.641028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.658 [2024-12-10 12:38:26.641035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.641042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:04.658 [2024-12-10 12:38:26.641049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.641056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e01a0 is same with the state(6) to be set 00:30:04.658 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.658 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:04.658 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.658 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:04.658 [2024-12-10 12:38:26.653625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.653656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.653671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.653678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.653687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.653694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.653702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.653709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.653717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.653724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.653732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.653739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.653748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.653755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.653763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.653769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.653777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.653784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.653792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.653799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.653807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.653813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.653821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.653828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.653836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 
12:38:26.653842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.653852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.653859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.653868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.653874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.653882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.653889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.653897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.653904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.653912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.653920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.653928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.653935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.653943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.653950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.653958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.653964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.653972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.653978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.653986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.653994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.654002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.654009] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.654017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.654024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.654031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.654040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.654049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.658 [2024-12-10 12:38:26.654056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.658 [2024-12-10 12:38:26.654064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:04.659 [2024-12-10 12:38:26.654187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 
12:38:26.654273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654438] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:04.659 [2024-12-10 12:38:26.654610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:04.659 [2024-12-10 12:38:26.654617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:04.659 [2024-12-10 12:38:26.654645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:04.659 [2024-12-10 12:38:26.654729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e01a0 (9): Bad file descriptor 00:30:04.659 [2024-12-10 12:38:26.655616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:04.659 task offset: 98304 on job bdev=Nvme0n1 fails 00:30:04.659 00:30:04.659 Latency(us) 00:30:04.659 [2024-12-10T11:38:26.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.659 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:04.660 Job: Nvme0n1 ended in about 0.41 seconds with error 00:30:04.660 Verification LBA range: start 0x0 length 0x400 00:30:04.660 Nvme0n1 : 0.41 1880.90 117.56 156.74 0.00 30562.72 1424.70 27582.11 00:30:04.660 [2024-12-10T11:38:26.828Z] =================================================================================================================== 00:30:04.660 [2024-12-10T11:38:26.828Z] Total : 1880.90 117.56 156.74 0.00 30562.72 1424.70 27582.11 00:30:04.660 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.660 12:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:30:04.660 [2024-12-10 12:38:26.658000] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 
00:30:04.660 [2024-12-10 12:38:26.661137] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:30:05.596 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1814660 00:30:05.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1814660) - No such process 00:30:05.596 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:30:05.596 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:30:05.596 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:05.596 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:30:05.596 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:05.596 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:05.596 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:05.596 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:05.596 { 00:30:05.596 "params": { 00:30:05.596 "name": "Nvme$subsystem", 00:30:05.596 "trtype": "$TEST_TRANSPORT", 00:30:05.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.596 "adrfam": "ipv4", 00:30:05.596 "trsvcid": "$NVMF_PORT", 00:30:05.596 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.596 "hdgst": ${hdgst:-false}, 00:30:05.596 "ddgst": ${ddgst:-false} 00:30:05.596 }, 00:30:05.596 "method": "bdev_nvme_attach_controller" 00:30:05.596 } 00:30:05.596 EOF 00:30:05.596 )") 00:30:05.596 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:05.596 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:30:05.596 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:05.596 12:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:05.596 "params": { 00:30:05.596 "name": "Nvme0", 00:30:05.596 "trtype": "tcp", 00:30:05.596 "traddr": "10.0.0.2", 00:30:05.596 "adrfam": "ipv4", 00:30:05.596 "trsvcid": "4420", 00:30:05.596 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:05.596 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:05.596 "hdgst": false, 00:30:05.596 "ddgst": false 00:30:05.596 }, 00:30:05.596 "method": "bdev_nvme_attach_controller" 00:30:05.596 }' 00:30:05.596 [2024-12-10 12:38:27.711605] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:30:05.596 [2024-12-10 12:38:27.711652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1815078 ] 00:30:05.854 [2024-12-10 12:38:27.770431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:05.854 [2024-12-10 12:38:27.810957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.854 Running I/O for 1 seconds... 
00:30:07.233 1984.00 IOPS, 124.00 MiB/s 00:30:07.233 Latency(us) 00:30:07.233 [2024-12-10T11:38:29.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:07.233 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:07.233 Verification LBA range: start 0x0 length 0x400 00:30:07.233 Nvme0n1 : 1.01 2027.75 126.73 0.00 0.00 31059.31 4530.53 27696.08 00:30:07.233 [2024-12-10T11:38:29.401Z] =================================================================================================================== 00:30:07.233 [2024-12-10T11:38:29.401Z] Total : 2027.75 126.73 0.00 0.00 31059.31 4530.53 27696.08 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:30:07.233 
12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:07.233 rmmod nvme_tcp 00:30:07.233 rmmod nvme_fabrics 00:30:07.233 rmmod nvme_keyring 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1814569 ']' 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1814569 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1814569 ']' 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1814569 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1814569 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:07.233 12:38:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1814569' 00:30:07.233 killing process with pid 1814569 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1814569 00:30:07.233 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1814569 00:30:07.492 [2024-12-10 12:38:29.454792] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:07.492 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:07.492 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:07.492 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:07.492 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:07.492 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:30:07.492 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:30:07.492 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:07.492 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:07.492 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:07.492 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.492 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:07.492 12:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.397 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:09.397 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:09.397 00:30:09.397 real 0m12.876s 00:30:09.397 user 0m17.654s 00:30:09.397 sys 0m6.337s 00:30:09.397 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:09.397 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:09.397 ************************************ 00:30:09.397 END TEST nvmf_host_management 00:30:09.397 ************************************ 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:09.657 ************************************ 00:30:09.657 START TEST nvmf_lvol 00:30:09.657 ************************************ 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:09.657 * Looking for test storage... 
00:30:09.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 
-- # case "$op" in 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:09.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.657 --rc genhtml_branch_coverage=1 00:30:09.657 --rc genhtml_function_coverage=1 00:30:09.657 --rc genhtml_legend=1 00:30:09.657 --rc geninfo_all_blocks=1 00:30:09.657 --rc geninfo_unexecuted_blocks=1 00:30:09.657 00:30:09.657 ' 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:09.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.657 --rc genhtml_branch_coverage=1 00:30:09.657 --rc genhtml_function_coverage=1 00:30:09.657 --rc genhtml_legend=1 00:30:09.657 --rc geninfo_all_blocks=1 00:30:09.657 --rc geninfo_unexecuted_blocks=1 00:30:09.657 00:30:09.657 ' 00:30:09.657 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:09.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.657 --rc genhtml_branch_coverage=1 00:30:09.657 --rc genhtml_function_coverage=1 00:30:09.657 --rc genhtml_legend=1 00:30:09.657 --rc geninfo_all_blocks=1 00:30:09.657 --rc geninfo_unexecuted_blocks=1 00:30:09.657 00:30:09.657 ' 00:30:09.658 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:09.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.658 --rc genhtml_branch_coverage=1 00:30:09.658 --rc genhtml_function_coverage=1 00:30:09.658 --rc genhtml_legend=1 00:30:09.658 --rc geninfo_all_blocks=1 00:30:09.658 --rc geninfo_unexecuted_blocks=1 00:30:09.658 00:30:09.658 ' 00:30:09.658 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:30:09.658 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:09.658 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.658 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.658 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.658 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.658 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:09.658 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:09.658 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:09.658 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:09.658 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.658 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:09.658 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:09.658 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:09.658 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.658 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:30:09.658 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:09.658 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:09.658 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:30:09.658 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:09.658 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 
00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:09.918 12:38:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:30:16.488 
12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:16.488 12:38:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:16.488 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:16.488 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:16.489 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:16.489 12:38:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:16.489 Found net devices under 0000:86:00.0: cvl_0_0 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.489 12:38:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:16.489 Found net devices under 0000:86:00.1: cvl_0_1 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:16.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:16.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:30:16.489 00:30:16.489 --- 10.0.0.2 ping statistics --- 00:30:16.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.489 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:16.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:16.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:30:16.489 00:30:16.489 --- 10.0.0.1 ping statistics --- 00:30:16.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.489 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1818754 
00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1818754 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1818754 ']' 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:16.489 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:16.489 [2024-12-10 12:38:37.772332] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:16.489 [2024-12-10 12:38:37.773243] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:30:16.489 [2024-12-10 12:38:37.773276] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.489 [2024-12-10 12:38:37.850590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:16.489 [2024-12-10 12:38:37.891627] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:16.489 [2024-12-10 12:38:37.891664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:16.489 [2024-12-10 12:38:37.891671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:16.489 [2024-12-10 12:38:37.891677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:16.489 [2024-12-10 12:38:37.891682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:16.490 [2024-12-10 12:38:37.892948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.490 [2024-12-10 12:38:37.893060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.490 [2024-12-10 12:38:37.893062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:16.490 [2024-12-10 12:38:37.961089] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:16.490 [2024-12-10 12:38:37.962007] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:16.490 [2024-12-10 12:38:37.962024] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:16.490 [2024-12-10 12:38:37.962226] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:16.490 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:16.490 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:30:16.490 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:16.490 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:16.490 12:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:16.490 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:16.490 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:16.490 [2024-12-10 12:38:38.209863] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:16.490 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:16.490 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:30:16.490 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:16.748 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:30:16.748 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:30:16.748 12:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:30:17.007 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=442a01d5-448e-4fb6-9fdd-14c5ca49d355 00:30:17.007 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u 442a01d5-448e-4fb6-9fdd-14c5ca49d355 lvol 20 00:30:17.266 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=33e55b96-6f33-4fe7-9da8-d68d7a6c3eca 00:30:17.266 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:17.526 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 33e55b96-6f33-4fe7-9da8-d68d7a6c3eca 00:30:17.526 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:17.784 [2024-12-10 12:38:39.817750] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.784 12:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:30:18.042 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1819143 00:30:18.042 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:30:18.042 12:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:30:18.978 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_snapshot 33e55b96-6f33-4fe7-9da8-d68d7a6c3eca MY_SNAPSHOT 00:30:19.237 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=43ff3c7b-18eb-480b-ab64-90dc0ad6f9bb 00:30:19.237 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_resize 33e55b96-6f33-4fe7-9da8-d68d7a6c3eca 30 00:30:19.495 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_clone 43ff3c7b-18eb-480b-ab64-90dc0ad6f9bb MY_CLONE 00:30:19.754 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=55f8a433-3ec1-4a2e-b7d7-515df88799f3 00:30:19.754 12:38:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_inflate 55f8a433-3ec1-4a2e-b7d7-515df88799f3 00:30:20.321 12:38:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1819143 00:30:28.437 Initializing NVMe Controllers 00:30:28.437 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode0 00:30:28.437 Controller IO queue size 128, less than required. 00:30:28.437 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:28.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:28.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:28.437 Initialization complete. Launching workers. 00:30:28.437 ======================================================== 00:30:28.437 Latency(us) 00:30:28.437 Device Information : IOPS MiB/s Average min max 00:30:28.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12191.90 47.62 10501.92 3648.96 55779.45 00:30:28.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12334.50 48.18 10375.44 1798.85 72021.31 00:30:28.437 ======================================================== 00:30:28.437 Total : 24526.40 95.81 10438.31 1798.85 72021.31 00:30:28.437 00:30:28.437 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:28.696 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete 33e55b96-6f33-4fe7-9da8-d68d7a6c3eca 00:30:28.956 12:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 442a01d5-448e-4fb6-9fdd-14c5ca49d355 00:30:28.956 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:30:28.956 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:30:28.956 12:38:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:30:28.956 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:28.956 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:30:28.956 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:28.956 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:30:28.956 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:28.956 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:29.215 rmmod nvme_tcp 00:30:29.215 rmmod nvme_fabrics 00:30:29.215 rmmod nvme_keyring 00:30:29.215 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:29.215 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:30:29.215 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:30:29.215 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1818754 ']' 00:30:29.215 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1818754 00:30:29.215 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1818754 ']' 00:30:29.215 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1818754 00:30:29.215 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:30:29.215 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:29.215 12:38:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1818754 00:30:29.215 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:29.215 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:29.215 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1818754' 00:30:29.215 killing process with pid 1818754 00:30:29.215 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1818754 00:30:29.215 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1818754 00:30:29.474 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:29.474 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:29.474 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:29.474 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:29.474 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:30:29.474 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:29.474 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:30:29.474 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:29.474 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:29.474 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.474 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:29.474 12:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.379 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:31.379 00:30:31.379 real 0m21.874s 00:30:31.379 user 0m55.912s 00:30:31.379 sys 0m9.767s 00:30:31.379 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:31.379 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:31.379 ************************************ 00:30:31.379 END TEST nvmf_lvol 00:30:31.379 ************************************ 00:30:31.379 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:31.379 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:31.379 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:31.379 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:31.638 ************************************ 00:30:31.638 START TEST nvmf_lvs_grow 00:30:31.639 ************************************ 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:31.639 * Looking for test storage... 
00:30:31.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:31.639 12:38:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:31.639 12:38:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:31.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.639 --rc genhtml_branch_coverage=1 00:30:31.639 --rc genhtml_function_coverage=1 00:30:31.639 --rc genhtml_legend=1 00:30:31.639 --rc geninfo_all_blocks=1 00:30:31.639 --rc geninfo_unexecuted_blocks=1 00:30:31.639 00:30:31.639 ' 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:31.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.639 --rc genhtml_branch_coverage=1 00:30:31.639 --rc genhtml_function_coverage=1 00:30:31.639 --rc genhtml_legend=1 00:30:31.639 --rc geninfo_all_blocks=1 00:30:31.639 --rc geninfo_unexecuted_blocks=1 00:30:31.639 00:30:31.639 ' 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:31.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.639 --rc genhtml_branch_coverage=1 00:30:31.639 --rc genhtml_function_coverage=1 00:30:31.639 --rc genhtml_legend=1 00:30:31.639 --rc geninfo_all_blocks=1 00:30:31.639 --rc geninfo_unexecuted_blocks=1 00:30:31.639 00:30:31.639 ' 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:31.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:31.639 --rc genhtml_branch_coverage=1 00:30:31.639 --rc genhtml_function_coverage=1 00:30:31.639 --rc genhtml_legend=1 00:30:31.639 --rc geninfo_all_blocks=1 00:30:31.639 --rc 
geninfo_unexecuted_blocks=1 00:30:31.639 00:30:31.639 ' 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:31.639 12:38:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.639 12:38:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:31.639 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:31.640 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:31.640 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:31.640 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:31.640 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:31.640 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:31.640 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:31.640 12:38:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:30:31.640 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:31.640 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:31.640 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:31.640 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:31.640 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:31.640 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:31.640 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:31.640 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:31.640 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:31.640 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:31.640 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:31.640 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:31.640 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:30:31.640 12:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:38.211 
12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:38.211 12:38:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:38.211 12:38:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:38.211 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:38.211 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:38.211 Found net devices under 0000:86:00.0: cvl_0_0 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.211 12:38:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:38.211 Found net devices under 0000:86:00.1: cvl_0_1 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:30:38.211 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:38.212 
12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:38.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:38.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:30:38.212 00:30:38.212 --- 10.0.0.2 ping statistics --- 00:30:38.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.212 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:38.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:38.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:30:38.212 00:30:38.212 --- 10.0.0.1 ping statistics --- 00:30:38.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:38.212 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:38.212 12:38:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1824499 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1824499 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1824499 ']' 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:38.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:38.212 [2024-12-10 12:38:59.751241] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:38.212 [2024-12-10 12:38:59.752151] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:30:38.212 [2024-12-10 12:38:59.752189] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:38.212 [2024-12-10 12:38:59.832822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.212 [2024-12-10 12:38:59.875596] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:38.212 [2024-12-10 12:38:59.875632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:38.212 [2024-12-10 12:38:59.875640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:38.212 [2024-12-10 12:38:59.875647] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:38.212 [2024-12-10 12:38:59.875653] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:38.212 [2024-12-10 12:38:59.876162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.212 [2024-12-10 12:38:59.943311] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:38.212 [2024-12-10 12:38:59.943518] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:38.212 12:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:38.212 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:38.212 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:38.212 [2024-12-10 12:39:00.184827] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:38.212 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:38.212 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:38.212 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:38.212 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:38.212 ************************************ 00:30:38.212 START TEST lvs_grow_clean 00:30:38.212 ************************************ 00:30:38.212 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:30:38.212 12:39:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:38.212 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:38.212 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:38.212 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:38.212 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:38.212 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:38.212 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:30:38.212 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:30:38.212 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:38.472 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:38.472 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:38.730 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3b5c19de-3192-4c71-a968-93aa219ed254 00:30:38.730 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b5c19de-3192-4c71-a968-93aa219ed254 00:30:38.730 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:38.730 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:38.730 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:38.730 12:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u 3b5c19de-3192-4c71-a968-93aa219ed254 lvol 150 00:30:38.989 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6389053e-ab04-4073-99d2-5424060c3281 00:30:38.989 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:30:38.989 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:39.248 [2024-12-10 12:39:01.256556] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:39.248 [2024-12-10 12:39:01.256682] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:39.248 true 00:30:39.248 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b5c19de-3192-4c71-a968-93aa219ed254 00:30:39.248 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:39.522 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:39.522 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:39.522 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6389053e-ab04-4073-99d2-5424060c3281 00:30:39.843 12:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:40.114 [2024-12-10 12:39:02.013033] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.114 12:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:40.114 12:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1824949 00:30:40.114 12:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:40.114 12:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:40.114 12:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1824949 /var/tmp/bdevperf.sock 00:30:40.114 12:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1824949 ']' 00:30:40.114 12:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:40.114 12:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:40.114 12:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:40.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:40.114 12:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:40.114 12:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:40.114 [2024-12-10 12:39:02.277791] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:30:40.114 [2024-12-10 12:39:02.277841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1824949 ] 00:30:40.373 [2024-12-10 12:39:02.353620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.373 [2024-12-10 12:39:02.394531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.373 12:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:40.373 12:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:30:40.373 12:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:40.941 Nvme0n1 00:30:40.941 12:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:40.941 [ 00:30:40.941 { 00:30:40.941 "name": "Nvme0n1", 00:30:40.941 "aliases": [ 00:30:40.941 "6389053e-ab04-4073-99d2-5424060c3281" 00:30:40.941 ], 00:30:40.941 "product_name": "NVMe disk", 00:30:40.941 
"block_size": 4096, 00:30:40.941 "num_blocks": 38912, 00:30:40.941 "uuid": "6389053e-ab04-4073-99d2-5424060c3281", 00:30:40.941 "numa_id": 1, 00:30:40.941 "assigned_rate_limits": { 00:30:40.941 "rw_ios_per_sec": 0, 00:30:40.941 "rw_mbytes_per_sec": 0, 00:30:40.941 "r_mbytes_per_sec": 0, 00:30:40.941 "w_mbytes_per_sec": 0 00:30:40.941 }, 00:30:40.941 "claimed": false, 00:30:40.941 "zoned": false, 00:30:40.941 "supported_io_types": { 00:30:40.941 "read": true, 00:30:40.941 "write": true, 00:30:40.942 "unmap": true, 00:30:40.942 "flush": true, 00:30:40.942 "reset": true, 00:30:40.942 "nvme_admin": true, 00:30:40.942 "nvme_io": true, 00:30:40.942 "nvme_io_md": false, 00:30:40.942 "write_zeroes": true, 00:30:40.942 "zcopy": false, 00:30:40.942 "get_zone_info": false, 00:30:40.942 "zone_management": false, 00:30:40.942 "zone_append": false, 00:30:40.942 "compare": true, 00:30:40.942 "compare_and_write": true, 00:30:40.942 "abort": true, 00:30:40.942 "seek_hole": false, 00:30:40.942 "seek_data": false, 00:30:40.942 "copy": true, 00:30:40.942 "nvme_iov_md": false 00:30:40.942 }, 00:30:40.942 "memory_domains": [ 00:30:40.942 { 00:30:40.942 "dma_device_id": "system", 00:30:40.942 "dma_device_type": 1 00:30:40.942 } 00:30:40.942 ], 00:30:40.942 "driver_specific": { 00:30:40.942 "nvme": [ 00:30:40.942 { 00:30:40.942 "trid": { 00:30:40.942 "trtype": "TCP", 00:30:40.942 "adrfam": "IPv4", 00:30:40.942 "traddr": "10.0.0.2", 00:30:40.942 "trsvcid": "4420", 00:30:40.942 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:40.942 }, 00:30:40.942 "ctrlr_data": { 00:30:40.942 "cntlid": 1, 00:30:40.942 "vendor_id": "0x8086", 00:30:40.942 "model_number": "SPDK bdev Controller", 00:30:40.942 "serial_number": "SPDK0", 00:30:40.942 "firmware_revision": "25.01", 00:30:40.942 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:40.942 "oacs": { 00:30:40.942 "security": 0, 00:30:40.942 "format": 0, 00:30:40.942 "firmware": 0, 00:30:40.942 "ns_manage": 0 00:30:40.942 }, 00:30:40.942 "multi_ctrlr": true, 
00:30:40.942 "ana_reporting": false 00:30:40.942 }, 00:30:40.942 "vs": { 00:30:40.942 "nvme_version": "1.3" 00:30:40.942 }, 00:30:40.942 "ns_data": { 00:30:40.942 "id": 1, 00:30:40.942 "can_share": true 00:30:40.942 } 00:30:40.942 } 00:30:40.942 ], 00:30:40.942 "mp_policy": "active_passive" 00:30:40.942 } 00:30:40.942 } 00:30:40.942 ] 00:30:40.942 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1825139 00:30:40.942 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:40.942 12:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:41.200 Running I/O for 10 seconds... 00:30:42.137 Latency(us) 00:30:42.137 [2024-12-10T11:39:04.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:42.137 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:42.137 Nvme0n1 : 1.00 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:30:42.137 [2024-12-10T11:39:04.305Z] =================================================================================================================== 00:30:42.137 [2024-12-10T11:39:04.305Z] Total : 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:30:42.137 00:30:43.073 12:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3b5c19de-3192-4c71-a968-93aa219ed254 00:30:43.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:43.073 Nvme0n1 : 2.00 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:30:43.073 [2024-12-10T11:39:05.241Z] 
=================================================================================================================== 00:30:43.073 [2024-12-10T11:39:05.241Z] Total : 22606.00 88.30 0.00 0.00 0.00 0.00 0.00 00:30:43.073 00:30:43.331 true 00:30:43.331 12:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b5c19de-3192-4c71-a968-93aa219ed254 00:30:43.331 12:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:43.331 12:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:43.331 12:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:43.331 12:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1825139 00:30:44.267 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:44.267 Nvme0n1 : 3.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:30:44.267 [2024-12-10T11:39:06.435Z] =================================================================================================================== 00:30:44.267 [2024-12-10T11:39:06.435Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:30:44.267 00:30:45.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:45.203 Nvme0n1 : 4.00 22812.50 89.11 0.00 0.00 0.00 0.00 0.00 00:30:45.203 [2024-12-10T11:39:07.371Z] =================================================================================================================== 00:30:45.203 [2024-12-10T11:39:07.371Z] Total : 22812.50 89.11 0.00 0.00 0.00 0.00 0.00 00:30:45.203 00:30:46.140 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:30:46.140 Nvme0n1 : 5.00 22895.20 89.43 0.00 0.00 0.00 0.00 0.00 00:30:46.140 [2024-12-10T11:39:08.308Z] =================================================================================================================== 00:30:46.140 [2024-12-10T11:39:08.308Z] Total : 22895.20 89.43 0.00 0.00 0.00 0.00 0.00 00:30:46.140 00:30:47.076 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:47.076 Nvme0n1 : 6.00 22952.83 89.66 0.00 0.00 0.00 0.00 0.00 00:30:47.076 [2024-12-10T11:39:09.244Z] =================================================================================================================== 00:30:47.076 [2024-12-10T11:39:09.244Z] Total : 22952.83 89.66 0.00 0.00 0.00 0.00 0.00 00:30:47.076 00:30:48.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:48.453 Nvme0n1 : 7.00 22975.86 89.75 0.00 0.00 0.00 0.00 0.00 00:30:48.453 [2024-12-10T11:39:10.621Z] =================================================================================================================== 00:30:48.453 [2024-12-10T11:39:10.621Z] Total : 22975.86 89.75 0.00 0.00 0.00 0.00 0.00 00:30:48.453 00:30:49.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:49.389 Nvme0n1 : 8.00 23009.00 89.88 0.00 0.00 0.00 0.00 0.00 00:30:49.389 [2024-12-10T11:39:11.557Z] =================================================================================================================== 00:30:49.389 [2024-12-10T11:39:11.557Z] Total : 23009.00 89.88 0.00 0.00 0.00 0.00 0.00 00:30:49.389 00:30:50.325 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:50.325 Nvme0n1 : 9.00 23034.78 89.98 0.00 0.00 0.00 0.00 0.00 00:30:50.325 [2024-12-10T11:39:12.493Z] =================================================================================================================== 00:30:50.325 [2024-12-10T11:39:12.493Z] Total : 23034.78 89.98 0.00 0.00 0.00 0.00 0.00 00:30:50.325 
00:30:51.262 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:51.262 Nvme0n1 : 10.00 23055.40 90.06 0.00 0.00 0.00 0.00 0.00 00:30:51.262 [2024-12-10T11:39:13.430Z] =================================================================================================================== 00:30:51.262 [2024-12-10T11:39:13.430Z] Total : 23055.40 90.06 0.00 0.00 0.00 0.00 0.00 00:30:51.262 00:30:51.262 00:30:51.262 Latency(us) 00:30:51.262 [2024-12-10T11:39:13.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.262 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:51.262 Nvme0n1 : 10.00 23062.21 90.09 0.00 0.00 5547.21 3219.81 28038.01 00:30:51.262 [2024-12-10T11:39:13.430Z] =================================================================================================================== 00:30:51.262 [2024-12-10T11:39:13.430Z] Total : 23062.21 90.09 0.00 0.00 5547.21 3219.81 28038.01 00:30:51.262 { 00:30:51.262 "results": [ 00:30:51.262 { 00:30:51.262 "job": "Nvme0n1", 00:30:51.262 "core_mask": "0x2", 00:30:51.262 "workload": "randwrite", 00:30:51.262 "status": "finished", 00:30:51.262 "queue_depth": 128, 00:30:51.262 "io_size": 4096, 00:30:51.262 "runtime": 10.002597, 00:30:51.262 "iops": 23062.21074386982, 00:30:51.262 "mibps": 90.08676071824148, 00:30:51.262 "io_failed": 0, 00:30:51.262 "io_timeout": 0, 00:30:51.262 "avg_latency_us": 5547.213857797088, 00:30:51.262 "min_latency_us": 3219.8121739130434, 00:30:51.262 "max_latency_us": 28038.01043478261 00:30:51.262 } 00:30:51.262 ], 00:30:51.262 "core_count": 1 00:30:51.262 } 00:30:51.262 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1824949 00:30:51.262 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1824949 ']' 00:30:51.262 12:39:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1824949 00:30:51.262 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:30:51.262 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:51.262 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1824949 00:30:51.262 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:51.262 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:51.262 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1824949' 00:30:51.262 killing process with pid 1824949 00:30:51.262 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1824949 00:30:51.262 Received shutdown signal, test time was about 10.000000 seconds 00:30:51.262 00:30:51.262 Latency(us) 00:30:51.262 [2024-12-10T11:39:13.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.262 [2024-12-10T11:39:13.430Z] =================================================================================================================== 00:30:51.262 [2024-12-10T11:39:13.430Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:51.262 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1824949 00:30:51.521 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:51.521 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:51.780 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b5c19de-3192-4c71-a968-93aa219ed254 00:30:51.780 12:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:52.039 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:52.039 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:52.039 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:52.299 [2024-12-10 12:39:14.216648] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:52.299 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b5c19de-3192-4c71-a968-93aa219ed254 00:30:52.299 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:30:52.299 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b5c19de-3192-4c71-a968-93aa219ed254 00:30:52.299 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:30:52.299 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:52.299 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:30:52.299 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:52.299 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:30:52.299 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:52.299 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:30:52.299 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:30:52.299 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b5c19de-3192-4c71-a968-93aa219ed254 00:30:52.299 request: 00:30:52.299 { 00:30:52.299 "uuid": "3b5c19de-3192-4c71-a968-93aa219ed254", 00:30:52.299 
"method": "bdev_lvol_get_lvstores", 00:30:52.299 "req_id": 1 00:30:52.299 } 00:30:52.299 Got JSON-RPC error response 00:30:52.299 response: 00:30:52.299 { 00:30:52.299 "code": -19, 00:30:52.299 "message": "No such device" 00:30:52.299 } 00:30:52.558 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:52.558 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:52.558 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:52.558 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:52.558 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:52.558 aio_bdev 00:30:52.558 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6389053e-ab04-4073-99d2-5424060c3281 00:30:52.558 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=6389053e-ab04-4073-99d2-5424060c3281 00:30:52.558 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:52.558 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:52.558 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:52.558 12:39:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:52.558 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:52.817 12:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b 6389053e-ab04-4073-99d2-5424060c3281 -t 2000 00:30:53.079 [ 00:30:53.079 { 00:30:53.079 "name": "6389053e-ab04-4073-99d2-5424060c3281", 00:30:53.079 "aliases": [ 00:30:53.079 "lvs/lvol" 00:30:53.079 ], 00:30:53.079 "product_name": "Logical Volume", 00:30:53.079 "block_size": 4096, 00:30:53.079 "num_blocks": 38912, 00:30:53.079 "uuid": "6389053e-ab04-4073-99d2-5424060c3281", 00:30:53.079 "assigned_rate_limits": { 00:30:53.079 "rw_ios_per_sec": 0, 00:30:53.079 "rw_mbytes_per_sec": 0, 00:30:53.079 "r_mbytes_per_sec": 0, 00:30:53.079 "w_mbytes_per_sec": 0 00:30:53.079 }, 00:30:53.079 "claimed": false, 00:30:53.079 "zoned": false, 00:30:53.079 "supported_io_types": { 00:30:53.079 "read": true, 00:30:53.079 "write": true, 00:30:53.080 "unmap": true, 00:30:53.080 "flush": false, 00:30:53.080 "reset": true, 00:30:53.080 "nvme_admin": false, 00:30:53.080 "nvme_io": false, 00:30:53.080 "nvme_io_md": false, 00:30:53.080 "write_zeroes": true, 00:30:53.080 "zcopy": false, 00:30:53.080 "get_zone_info": false, 00:30:53.080 "zone_management": false, 00:30:53.080 "zone_append": false, 00:30:53.080 "compare": false, 00:30:53.080 "compare_and_write": false, 00:30:53.080 "abort": false, 00:30:53.080 "seek_hole": true, 00:30:53.080 "seek_data": true, 00:30:53.080 "copy": false, 00:30:53.080 "nvme_iov_md": false 00:30:53.080 }, 00:30:53.080 "driver_specific": { 00:30:53.080 "lvol": { 00:30:53.080 "lvol_store_uuid": 
"3b5c19de-3192-4c71-a968-93aa219ed254", 00:30:53.080 "base_bdev": "aio_bdev", 00:30:53.080 "thin_provision": false, 00:30:53.080 "num_allocated_clusters": 38, 00:30:53.080 "snapshot": false, 00:30:53.080 "clone": false, 00:30:53.080 "esnap_clone": false 00:30:53.080 } 00:30:53.080 } 00:30:53.080 } 00:30:53.080 ] 00:30:53.080 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:30:53.080 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b5c19de-3192-4c71-a968-93aa219ed254 00:30:53.080 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:53.080 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:53.080 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:53.080 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b5c19de-3192-4c71-a968-93aa219ed254 00:30:53.339 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:53.339 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete 6389053e-ab04-4073-99d2-5424060c3281 00:30:53.597 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3b5c19de-3192-4c71-a968-93aa219ed254 00:30:53.856 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:53.856 12:39:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:30:54.115 00:30:54.115 real 0m15.786s 00:30:54.115 user 0m15.309s 00:30:54.115 sys 0m1.482s 00:30:54.115 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:54.115 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:54.115 ************************************ 00:30:54.115 END TEST lvs_grow_clean 00:30:54.115 ************************************ 00:30:54.115 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:54.115 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:54.115 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:54.115 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:54.115 ************************************ 00:30:54.115 START TEST lvs_grow_dirty 00:30:54.115 ************************************ 00:30:54.115 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:30:54.115 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:54.115 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:54.115 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:54.115 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:54.115 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:54.115 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:54.115 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:30:54.115 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:30:54.115 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:54.374 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:54.374 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 
aio_bdev lvs 00:30:54.374 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=43fd7789-25f5-4604-9cbd-ee4f549ddcdc 00:30:54.374 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43fd7789-25f5-4604-9cbd-ee4f549ddcdc 00:30:54.374 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:54.633 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:54.633 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:54.633 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u 43fd7789-25f5-4604-9cbd-ee4f549ddcdc lvol 150 00:30:54.892 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=47a6784e-e93d-4bd9-9021-dc1c04bc0310 00:30:54.892 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:30:54.892 12:39:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:55.152 [2024-12-10 12:39:17.108556] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block 
count 102400 00:30:55.152 [2024-12-10 12:39:17.108683] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:55.152 true 00:30:55.152 12:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43fd7789-25f5-4604-9cbd-ee4f549ddcdc 00:30:55.152 12:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:55.411 12:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:55.411 12:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:55.411 12:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 47a6784e-e93d-4bd9-9021-dc1c04bc0310 00:30:55.670 12:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:55.929 [2024-12-10 12:39:17.852953] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.929 12:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:55.929 12:39:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1827930 00:30:55.929 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:55.929 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:55.929 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1827930 /var/tmp/bdevperf.sock 00:30:55.929 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1827930 ']' 00:30:55.929 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:55.929 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:55.929 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:55.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:55.929 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:55.929 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:56.188 [2024-12-10 12:39:18.104767] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:30:56.188 [2024-12-10 12:39:18.104818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1827930 ] 00:30:56.188 [2024-12-10 12:39:18.180290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.188 [2024-12-10 12:39:18.225367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.188 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:56.188 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:56.188 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:56.757 Nvme0n1 00:30:56.757 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:57.017 [ 00:30:57.017 { 00:30:57.017 "name": "Nvme0n1", 00:30:57.017 "aliases": [ 00:30:57.017 "47a6784e-e93d-4bd9-9021-dc1c04bc0310" 00:30:57.017 ], 00:30:57.017 "product_name": "NVMe disk", 00:30:57.017 "block_size": 4096, 00:30:57.017 "num_blocks": 38912, 00:30:57.017 "uuid": "47a6784e-e93d-4bd9-9021-dc1c04bc0310", 00:30:57.017 "numa_id": 1, 00:30:57.017 "assigned_rate_limits": { 00:30:57.017 "rw_ios_per_sec": 0, 00:30:57.017 "rw_mbytes_per_sec": 0, 00:30:57.017 "r_mbytes_per_sec": 0, 00:30:57.017 "w_mbytes_per_sec": 0 00:30:57.017 }, 00:30:57.017 "claimed": false, 00:30:57.017 "zoned": false, 
00:30:57.017 "supported_io_types": { 00:30:57.017 "read": true, 00:30:57.017 "write": true, 00:30:57.017 "unmap": true, 00:30:57.017 "flush": true, 00:30:57.017 "reset": true, 00:30:57.017 "nvme_admin": true, 00:30:57.017 "nvme_io": true, 00:30:57.017 "nvme_io_md": false, 00:30:57.017 "write_zeroes": true, 00:30:57.017 "zcopy": false, 00:30:57.017 "get_zone_info": false, 00:30:57.017 "zone_management": false, 00:30:57.017 "zone_append": false, 00:30:57.017 "compare": true, 00:30:57.017 "compare_and_write": true, 00:30:57.017 "abort": true, 00:30:57.017 "seek_hole": false, 00:30:57.017 "seek_data": false, 00:30:57.017 "copy": true, 00:30:57.017 "nvme_iov_md": false 00:30:57.017 }, 00:30:57.017 "memory_domains": [ 00:30:57.017 { 00:30:57.017 "dma_device_id": "system", 00:30:57.017 "dma_device_type": 1 00:30:57.017 } 00:30:57.017 ], 00:30:57.017 "driver_specific": { 00:30:57.017 "nvme": [ 00:30:57.017 { 00:30:57.017 "trid": { 00:30:57.017 "trtype": "TCP", 00:30:57.017 "adrfam": "IPv4", 00:30:57.017 "traddr": "10.0.0.2", 00:30:57.017 "trsvcid": "4420", 00:30:57.017 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:57.017 }, 00:30:57.017 "ctrlr_data": { 00:30:57.017 "cntlid": 1, 00:30:57.017 "vendor_id": "0x8086", 00:30:57.017 "model_number": "SPDK bdev Controller", 00:30:57.017 "serial_number": "SPDK0", 00:30:57.017 "firmware_revision": "25.01", 00:30:57.017 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:57.017 "oacs": { 00:30:57.017 "security": 0, 00:30:57.017 "format": 0, 00:30:57.017 "firmware": 0, 00:30:57.017 "ns_manage": 0 00:30:57.017 }, 00:30:57.017 "multi_ctrlr": true, 00:30:57.017 "ana_reporting": false 00:30:57.017 }, 00:30:57.017 "vs": { 00:30:57.017 "nvme_version": "1.3" 00:30:57.017 }, 00:30:57.017 "ns_data": { 00:30:57.017 "id": 1, 00:30:57.017 "can_share": true 00:30:57.017 } 00:30:57.017 } 00:30:57.017 ], 00:30:57.017 "mp_policy": "active_passive" 00:30:57.017 } 00:30:57.017 } 00:30:57.017 ] 00:30:57.017 12:39:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1828106 00:30:57.017 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:57.017 12:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:57.017 Running I/O for 10 seconds... 00:30:57.954 Latency(us) 00:30:57.954 [2024-12-10T11:39:20.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:57.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:57.954 Nvme0n1 : 1.00 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:30:57.954 [2024-12-10T11:39:20.122Z] =================================================================================================================== 00:30:57.954 [2024-12-10T11:39:20.122Z] Total : 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:30:57.954 00:30:58.891 12:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 43fd7789-25f5-4604-9cbd-ee4f549ddcdc 00:30:58.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:58.891 Nvme0n1 : 2.00 22288.50 87.06 0.00 0.00 0.00 0.00 0.00 00:30:58.891 [2024-12-10T11:39:21.059Z] =================================================================================================================== 00:30:58.891 [2024-12-10T11:39:21.059Z] Total : 22288.50 87.06 0.00 0.00 0.00 0.00 0.00 00:30:58.891 00:30:59.150 true 00:30:59.150 12:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 43fd7789-25f5-4604-9cbd-ee4f549ddcdc 00:30:59.150 12:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:59.409 12:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:59.409 12:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:59.409 12:39:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1828106 00:30:59.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:59.977 Nvme0n1 : 3.00 22458.00 87.73 0.00 0.00 0.00 0.00 0.00 00:30:59.977 [2024-12-10T11:39:22.145Z] =================================================================================================================== 00:30:59.977 [2024-12-10T11:39:22.145Z] Total : 22458.00 87.73 0.00 0.00 0.00 0.00 0.00 00:30:59.977 00:31:00.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:00.914 Nvme0n1 : 4.00 22602.50 88.29 0.00 0.00 0.00 0.00 0.00 00:31:00.914 [2024-12-10T11:39:23.082Z] =================================================================================================================== 00:31:00.914 [2024-12-10T11:39:23.082Z] Total : 22602.50 88.29 0.00 0.00 0.00 0.00 0.00 00:31:00.914 00:31:02.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:02.291 Nvme0n1 : 5.00 22698.80 88.67 0.00 0.00 0.00 0.00 0.00 00:31:02.291 [2024-12-10T11:39:24.459Z] =================================================================================================================== 00:31:02.291 [2024-12-10T11:39:24.459Z] Total : 22698.80 88.67 0.00 0.00 0.00 0.00 0.00 00:31:02.291 00:31:03.228 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:31:03.228 Nvme0n1 : 6.00 22773.67 88.96 0.00 0.00 0.00 0.00 0.00 00:31:03.228 [2024-12-10T11:39:25.396Z] =================================================================================================================== 00:31:03.228 [2024-12-10T11:39:25.396Z] Total : 22773.67 88.96 0.00 0.00 0.00 0.00 0.00 00:31:03.228 00:31:04.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:04.164 Nvme0n1 : 7.00 22840.43 89.22 0.00 0.00 0.00 0.00 0.00 00:31:04.164 [2024-12-10T11:39:26.332Z] =================================================================================================================== 00:31:04.164 [2024-12-10T11:39:26.332Z] Total : 22840.43 89.22 0.00 0.00 0.00 0.00 0.00 00:31:04.164 00:31:05.101 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:05.101 Nvme0n1 : 8.00 22874.62 89.35 0.00 0.00 0.00 0.00 0.00 00:31:05.101 [2024-12-10T11:39:27.269Z] =================================================================================================================== 00:31:05.101 [2024-12-10T11:39:27.270Z] Total : 22874.62 89.35 0.00 0.00 0.00 0.00 0.00 00:31:05.102 00:31:06.039 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:06.039 Nvme0n1 : 9.00 22915.33 89.51 0.00 0.00 0.00 0.00 0.00 00:31:06.039 [2024-12-10T11:39:28.207Z] =================================================================================================================== 00:31:06.039 [2024-12-10T11:39:28.207Z] Total : 22915.33 89.51 0.00 0.00 0.00 0.00 0.00 00:31:06.039 00:31:06.976 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:06.976 Nvme0n1 : 10.00 22936.90 89.60 0.00 0.00 0.00 0.00 0.00 00:31:06.976 [2024-12-10T11:39:29.144Z] =================================================================================================================== 00:31:06.976 [2024-12-10T11:39:29.144Z] Total : 22936.90 89.60 0.00 0.00 0.00 0.00 0.00 00:31:06.976 00:31:06.976 
00:31:06.976 Latency(us) 00:31:06.976 [2024-12-10T11:39:29.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:06.976 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:06.976 Nvme0n1 : 10.00 22939.24 89.61 0.00 0.00 5576.89 3219.81 27582.11 00:31:06.976 [2024-12-10T11:39:29.144Z] =================================================================================================================== 00:31:06.976 [2024-12-10T11:39:29.144Z] Total : 22939.24 89.61 0.00 0.00 5576.89 3219.81 27582.11 00:31:06.976 { 00:31:06.976 "results": [ 00:31:06.976 { 00:31:06.976 "job": "Nvme0n1", 00:31:06.976 "core_mask": "0x2", 00:31:06.976 "workload": "randwrite", 00:31:06.976 "status": "finished", 00:31:06.976 "queue_depth": 128, 00:31:06.976 "io_size": 4096, 00:31:06.976 "runtime": 10.00382, 00:31:06.976 "iops": 22939.23721138525, 00:31:06.976 "mibps": 89.60639535697364, 00:31:06.976 "io_failed": 0, 00:31:06.976 "io_timeout": 0, 00:31:06.976 "avg_latency_us": 5576.887690006139, 00:31:06.976 "min_latency_us": 3219.8121739130434, 00:31:06.976 "max_latency_us": 27582.107826086958 00:31:06.976 } 00:31:06.976 ], 00:31:06.976 "core_count": 1 00:31:06.976 } 00:31:06.976 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1827930 00:31:06.976 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1827930 ']' 00:31:06.976 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1827930 00:31:06.976 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:31:06.976 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:06.976 12:39:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1827930 00:31:06.976 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:06.976 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:06.976 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1827930' 00:31:06.976 killing process with pid 1827930 00:31:06.976 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1827930 00:31:06.977 Received shutdown signal, test time was about 10.000000 seconds 00:31:06.977 00:31:06.977 Latency(us) 00:31:06.977 [2024-12-10T11:39:29.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:06.977 [2024-12-10T11:39:29.145Z] =================================================================================================================== 00:31:06.977 [2024-12-10T11:39:29.145Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:06.977 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1827930 00:31:07.236 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:07.495 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:07.755 12:39:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43fd7789-25f5-4604-9cbd-ee4f549ddcdc 00:31:07.755 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:07.755 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:07.755 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:07.755 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1824499 00:31:07.755 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1824499 00:31:07.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1824499 Killed "${NVMF_APP[@]}" "$@" 00:31:08.014 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:08.014 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:08.014 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:08.014 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:08.014 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:08.014 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1829938 00:31:08.014 12:39:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1829938 00:31:08.014 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:08.014 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1829938 ']' 00:31:08.014 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.014 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:08.014 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:08.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:08.014 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:08.014 12:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:08.014 [2024-12-10 12:39:29.968299] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:08.014 [2024-12-10 12:39:29.969235] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:31:08.014 [2024-12-10 12:39:29.969273] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:08.014 [2024-12-10 12:39:30.052571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.014 [2024-12-10 12:39:30.092707] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:08.014 [2024-12-10 12:39:30.092744] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:08.014 [2024-12-10 12:39:30.092752] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:08.014 [2024-12-10 12:39:30.092758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:08.014 [2024-12-10 12:39:30.092763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:08.014 [2024-12-10 12:39:30.093296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.014 [2024-12-10 12:39:30.161643] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:08.014 [2024-12-10 12:39:30.161843] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:08.274 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:08.274 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:08.274 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:08.274 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:08.274 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:08.274 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:08.274 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:08.274 [2024-12-10 12:39:30.406676] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:08.274 [2024-12-10 12:39:30.406877] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:08.274 [2024-12-10 12:39:30.406961] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:08.533 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:31:08.533 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 47a6784e-e93d-4bd9-9021-dc1c04bc0310 00:31:08.533 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # 
local bdev_name=47a6784e-e93d-4bd9-9021-dc1c04bc0310 00:31:08.533 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:08.533 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:08.533 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:08.533 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:08.533 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:08.533 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b 47a6784e-e93d-4bd9-9021-dc1c04bc0310 -t 2000 00:31:08.793 [ 00:31:08.793 { 00:31:08.793 "name": "47a6784e-e93d-4bd9-9021-dc1c04bc0310", 00:31:08.793 "aliases": [ 00:31:08.793 "lvs/lvol" 00:31:08.793 ], 00:31:08.793 "product_name": "Logical Volume", 00:31:08.793 "block_size": 4096, 00:31:08.793 "num_blocks": 38912, 00:31:08.793 "uuid": "47a6784e-e93d-4bd9-9021-dc1c04bc0310", 00:31:08.793 "assigned_rate_limits": { 00:31:08.793 "rw_ios_per_sec": 0, 00:31:08.793 "rw_mbytes_per_sec": 0, 00:31:08.793 "r_mbytes_per_sec": 0, 00:31:08.793 "w_mbytes_per_sec": 0 00:31:08.793 }, 00:31:08.793 "claimed": false, 00:31:08.793 "zoned": false, 00:31:08.793 "supported_io_types": { 00:31:08.793 "read": true, 00:31:08.793 "write": true, 00:31:08.793 "unmap": true, 00:31:08.793 "flush": false, 00:31:08.793 "reset": true, 00:31:08.793 "nvme_admin": false, 00:31:08.793 "nvme_io": false, 00:31:08.793 "nvme_io_md": false, 00:31:08.793 
"write_zeroes": true, 00:31:08.793 "zcopy": false, 00:31:08.793 "get_zone_info": false, 00:31:08.793 "zone_management": false, 00:31:08.793 "zone_append": false, 00:31:08.793 "compare": false, 00:31:08.793 "compare_and_write": false, 00:31:08.793 "abort": false, 00:31:08.793 "seek_hole": true, 00:31:08.793 "seek_data": true, 00:31:08.793 "copy": false, 00:31:08.793 "nvme_iov_md": false 00:31:08.793 }, 00:31:08.793 "driver_specific": { 00:31:08.793 "lvol": { 00:31:08.793 "lvol_store_uuid": "43fd7789-25f5-4604-9cbd-ee4f549ddcdc", 00:31:08.793 "base_bdev": "aio_bdev", 00:31:08.793 "thin_provision": false, 00:31:08.793 "num_allocated_clusters": 38, 00:31:08.793 "snapshot": false, 00:31:08.793 "clone": false, 00:31:08.793 "esnap_clone": false 00:31:08.793 } 00:31:08.793 } 00:31:08.793 } 00:31:08.793 ] 00:31:08.793 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:08.793 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43fd7789-25f5-4604-9cbd-ee4f549ddcdc 00:31:08.793 12:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:31:09.050 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:31:09.050 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43fd7789-25f5-4604-9cbd-ee4f549ddcdc 00:31:09.050 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:31:09.308 12:39:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:31:09.308 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:09.308 [2024-12-10 12:39:31.405757] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:09.308 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43fd7789-25f5-4604-9cbd-ee4f549ddcdc 00:31:09.308 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:31:09.308 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43fd7789-25f5-4604-9cbd-ee4f549ddcdc 00:31:09.308 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:31:09.309 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:09.309 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:31:09.309 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:09.309 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:31:09.309 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:09.309 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:31:09.309 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:31:09.309 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43fd7789-25f5-4604-9cbd-ee4f549ddcdc 00:31:09.568 request: 00:31:09.568 { 00:31:09.568 "uuid": "43fd7789-25f5-4604-9cbd-ee4f549ddcdc", 00:31:09.568 "method": "bdev_lvol_get_lvstores", 00:31:09.568 "req_id": 1 00:31:09.568 } 00:31:09.568 Got JSON-RPC error response 00:31:09.568 response: 00:31:09.568 { 00:31:09.568 "code": -19, 00:31:09.568 "message": "No such device" 00:31:09.568 } 00:31:09.568 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:31:09.568 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:09.568 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:09.568 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:09.568 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:09.827 aio_bdev 00:31:09.827 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 47a6784e-e93d-4bd9-9021-dc1c04bc0310 00:31:09.827 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=47a6784e-e93d-4bd9-9021-dc1c04bc0310 00:31:09.827 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:09.827 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:09.827 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:09.827 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:09.827 12:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:10.086 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b 47a6784e-e93d-4bd9-9021-dc1c04bc0310 -t 2000 00:31:10.086 [ 00:31:10.086 { 00:31:10.086 "name": "47a6784e-e93d-4bd9-9021-dc1c04bc0310", 00:31:10.086 "aliases": [ 00:31:10.086 "lvs/lvol" 00:31:10.086 ], 00:31:10.086 "product_name": "Logical Volume", 00:31:10.086 "block_size": 4096, 00:31:10.086 "num_blocks": 38912, 00:31:10.086 "uuid": "47a6784e-e93d-4bd9-9021-dc1c04bc0310", 00:31:10.086 "assigned_rate_limits": { 00:31:10.086 
"rw_ios_per_sec": 0, 00:31:10.086 "rw_mbytes_per_sec": 0, 00:31:10.086 "r_mbytes_per_sec": 0, 00:31:10.086 "w_mbytes_per_sec": 0 00:31:10.086 }, 00:31:10.086 "claimed": false, 00:31:10.086 "zoned": false, 00:31:10.086 "supported_io_types": { 00:31:10.086 "read": true, 00:31:10.086 "write": true, 00:31:10.086 "unmap": true, 00:31:10.086 "flush": false, 00:31:10.086 "reset": true, 00:31:10.086 "nvme_admin": false, 00:31:10.086 "nvme_io": false, 00:31:10.086 "nvme_io_md": false, 00:31:10.086 "write_zeroes": true, 00:31:10.086 "zcopy": false, 00:31:10.086 "get_zone_info": false, 00:31:10.086 "zone_management": false, 00:31:10.086 "zone_append": false, 00:31:10.086 "compare": false, 00:31:10.086 "compare_and_write": false, 00:31:10.086 "abort": false, 00:31:10.086 "seek_hole": true, 00:31:10.086 "seek_data": true, 00:31:10.086 "copy": false, 00:31:10.087 "nvme_iov_md": false 00:31:10.087 }, 00:31:10.087 "driver_specific": { 00:31:10.087 "lvol": { 00:31:10.087 "lvol_store_uuid": "43fd7789-25f5-4604-9cbd-ee4f549ddcdc", 00:31:10.087 "base_bdev": "aio_bdev", 00:31:10.087 "thin_provision": false, 00:31:10.087 "num_allocated_clusters": 38, 00:31:10.087 "snapshot": false, 00:31:10.087 "clone": false, 00:31:10.087 "esnap_clone": false 00:31:10.087 } 00:31:10.087 } 00:31:10.087 } 00:31:10.087 ] 00:31:10.345 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:10.345 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43fd7789-25f5-4604-9cbd-ee4f549ddcdc 00:31:10.345 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:10.345 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 
61 )) 00:31:10.345 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 43fd7789-25f5-4604-9cbd-ee4f549ddcdc 00:31:10.345 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:10.604 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:10.604 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete 47a6784e-e93d-4bd9-9021-dc1c04bc0310 00:31:10.863 12:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 43fd7789-25f5-4604-9cbd-ee4f549ddcdc 00:31:11.122 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:11.122 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:31:11.382 00:31:11.382 real 0m17.197s 00:31:11.382 user 0m34.598s 00:31:11.382 sys 0m3.861s 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:11.382 ************************************ 00:31:11.382 END TEST lvs_grow_dirty 00:31:11.382 
************************************ 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:11.382 nvmf_trace.0 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:11.382 12:39:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:11.382 rmmod nvme_tcp 00:31:11.382 rmmod nvme_fabrics 00:31:11.382 rmmod nvme_keyring 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1829938 ']' 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1829938 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1829938 ']' 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1829938 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1829938 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:11.382 
12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1829938' 00:31:11.382 killing process with pid 1829938 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1829938 00:31:11.382 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1829938 00:31:11.641 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:11.641 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:11.641 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:11.641 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:31:11.641 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:31:11.641 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:11.641 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:31:11.641 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:11.641 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:11.641 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.641 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:11.641 12:39:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.179 
12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:14.179 00:31:14.179 real 0m42.173s 00:31:14.179 user 0m52.458s 00:31:14.179 sys 0m10.205s 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:14.179 ************************************ 00:31:14.179 END TEST nvmf_lvs_grow 00:31:14.179 ************************************ 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:14.179 ************************************ 00:31:14.179 START TEST nvmf_bdev_io_wait 00:31:14.179 ************************************ 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:14.179 * Looking for test storage... 
00:31:14.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:14.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.179 --rc genhtml_branch_coverage=1 00:31:14.179 --rc genhtml_function_coverage=1 00:31:14.179 --rc genhtml_legend=1 00:31:14.179 --rc geninfo_all_blocks=1 00:31:14.179 --rc geninfo_unexecuted_blocks=1 00:31:14.179 00:31:14.179 ' 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:14.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.179 --rc genhtml_branch_coverage=1 00:31:14.179 --rc genhtml_function_coverage=1 00:31:14.179 --rc genhtml_legend=1 00:31:14.179 --rc geninfo_all_blocks=1 00:31:14.179 --rc geninfo_unexecuted_blocks=1 00:31:14.179 00:31:14.179 ' 00:31:14.179 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:14.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.180 --rc genhtml_branch_coverage=1 00:31:14.180 --rc genhtml_function_coverage=1 00:31:14.180 --rc genhtml_legend=1 00:31:14.180 --rc geninfo_all_blocks=1 00:31:14.180 --rc geninfo_unexecuted_blocks=1 00:31:14.180 00:31:14.180 ' 00:31:14.180 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:14.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.180 --rc genhtml_branch_coverage=1 00:31:14.180 --rc genhtml_function_coverage=1 
00:31:14.180 --rc genhtml_legend=1 00:31:14.180 --rc geninfo_all_blocks=1 00:31:14.180 --rc geninfo_unexecuted_blocks=1 00:31:14.180 00:31:14.180 ' 00:31:14.180 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:31:14.180 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:31:14.180 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:14.180 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:14.180 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:14.180 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:14.180 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:14.180 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:14.180 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:14.180 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:14.180 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:14.180 12:39:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:14.180 12:39:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.180 12:39:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:14.180 12:39:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:14.180 12:39:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:31:14.180 12:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:31:20.752 12:39:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:20.752 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:20.752 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:20.752 Found net devices under 0000:86:00.0: cvl_0_0 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:20.752 Found net devices under 0000:86:00.1: cvl_0_1 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:31:20.752 12:39:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:20.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:20.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:31:20.752 00:31:20.752 --- 10.0.0.2 ping statistics --- 00:31:20.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.752 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:20.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:20.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:31:20.752 00:31:20.752 --- 10.0.0.1 ping statistics --- 00:31:20.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.752 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:20.752 12:39:41 
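
[editor's note] The interface preparation traced above (namespace creation, address assignment, firewall rule, connectivity pings) corresponds to the following standalone sketch. Interface names (cvl_0_0 / cvl_0_1), the namespace name, and the 10.0.0.0/24 addresses are taken from this log and will differ on other hardware; this requires root and physical NICs, so it is illustrative only.

```shell
# Sketch of nvmf_tcp_init as traced in this log: move the target-side
# port into its own network namespace, address both sides, open the
# NVMe/TCP port, and verify connectivity in both directions.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                         # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1     # target ns -> root ns
```

Running the target under `ip netns exec "$NS"` (as the log does via NVMF_TARGET_NS_CMD) isolates its listener from the initiator's network stack, so the TCP path exercises a real interface rather than loopback.
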
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1833989 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1833989 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1833989 ']' 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:20.752 12:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:20.752 [2024-12-10 12:39:41.978328] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:20.752 [2024-12-10 12:39:41.979258] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:31:20.752 [2024-12-10 12:39:41.979292] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.752 [2024-12-10 12:39:42.058725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:20.752 [2024-12-10 12:39:42.101265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:20.752 [2024-12-10 12:39:42.101302] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:20.752 [2024-12-10 12:39:42.101309] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:20.752 [2024-12-10 12:39:42.101315] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:20.752 [2024-12-10 12:39:42.101320] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:20.752 [2024-12-10 12:39:42.102705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.752 [2024-12-10 12:39:42.102813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:20.752 [2024-12-10 12:39:42.102922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.752 [2024-12-10 12:39:42.102923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:20.752 [2024-12-10 12:39:42.103192] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.752 12:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:20.752 [2024-12-10 12:39:42.242674] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:20.752 [2024-12-10 12:39:42.243468] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:20.752 [2024-12-10 12:39:42.243626] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:20.752 [2024-12-10 12:39:42.243752] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:20.752 [2024-12-10 12:39:42.255387] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:20.752 Malloc0 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.752 12:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.752 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:20.753 [2024-12-10 12:39:42.323849] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1834013 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1834015 00:31:20.753 12:39:42 
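
[editor's note] The rpc_cmd calls traced above (bdev_io_wait.sh@18 through @25) amount to the following RPC sequence against the running nvmf_tgt. The `scripts/rpc.py` path is an assumption from the SPDK source layout; the arguments mirror the log. A running SPDK target is required, so this is a sketch, not a runnable test.

```shell
# Sketch of the target configuration this test performs. The small
# bdev_io pool (-p 5 -c 1) is what forces the bdev_io_wait path the
# test exercises once bdevperf saturates it.
rpc=./scripts/rpc.py
$rpc bdev_set_options -p 5 -c 1                 # tiny bdev_io pool per core
$rpc framework_start_init                       # leave --wait-for-rpc state
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
```

After this, the log launches four bdevperf instances (write, read, flush, unmap) against the listener, each pinned to its own core mask.
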
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:20.753 { 00:31:20.753 "params": { 00:31:20.753 "name": "Nvme$subsystem", 00:31:20.753 "trtype": "$TEST_TRANSPORT", 00:31:20.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:20.753 "adrfam": "ipv4", 00:31:20.753 "trsvcid": "$NVMF_PORT", 00:31:20.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:20.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:20.753 "hdgst": ${hdgst:-false}, 00:31:20.753 "ddgst": ${ddgst:-false} 00:31:20.753 }, 00:31:20.753 "method": "bdev_nvme_attach_controller" 00:31:20.753 } 00:31:20.753 EOF 00:31:20.753 )") 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1834017 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:20.753 12:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:20.753 { 00:31:20.753 "params": { 00:31:20.753 "name": "Nvme$subsystem", 00:31:20.753 "trtype": "$TEST_TRANSPORT", 00:31:20.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:20.753 "adrfam": "ipv4", 00:31:20.753 "trsvcid": "$NVMF_PORT", 00:31:20.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:20.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:20.753 "hdgst": ${hdgst:-false}, 00:31:20.753 "ddgst": ${ddgst:-false} 00:31:20.753 }, 00:31:20.753 "method": "bdev_nvme_attach_controller" 00:31:20.753 } 00:31:20.753 EOF 00:31:20.753 )") 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1834020 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:20.753 { 00:31:20.753 "params": { 00:31:20.753 
"name": "Nvme$subsystem", 00:31:20.753 "trtype": "$TEST_TRANSPORT", 00:31:20.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:20.753 "adrfam": "ipv4", 00:31:20.753 "trsvcid": "$NVMF_PORT", 00:31:20.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:20.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:20.753 "hdgst": ${hdgst:-false}, 00:31:20.753 "ddgst": ${ddgst:-false} 00:31:20.753 }, 00:31:20.753 "method": "bdev_nvme_attach_controller" 00:31:20.753 } 00:31:20.753 EOF 00:31:20.753 )") 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:20.753 { 00:31:20.753 "params": { 00:31:20.753 "name": "Nvme$subsystem", 00:31:20.753 "trtype": "$TEST_TRANSPORT", 00:31:20.753 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:20.753 "adrfam": "ipv4", 00:31:20.753 "trsvcid": "$NVMF_PORT", 00:31:20.753 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:20.753 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:20.753 "hdgst": ${hdgst:-false}, 00:31:20.753 "ddgst": ${ddgst:-false} 00:31:20.753 }, 00:31:20.753 
"method": "bdev_nvme_attach_controller" 00:31:20.753 } 00:31:20.753 EOF 00:31:20.753 )") 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1834013 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:20.753 "params": { 00:31:20.753 "name": "Nvme1", 00:31:20.753 "trtype": "tcp", 00:31:20.753 "traddr": "10.0.0.2", 00:31:20.753 "adrfam": "ipv4", 00:31:20.753 "trsvcid": "4420", 00:31:20.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:20.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:20.753 "hdgst": false, 00:31:20.753 "ddgst": false 00:31:20.753 }, 00:31:20.753 "method": "bdev_nvme_attach_controller" 00:31:20.753 }' 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:20.753 "params": { 00:31:20.753 "name": "Nvme1", 00:31:20.753 "trtype": "tcp", 00:31:20.753 "traddr": "10.0.0.2", 00:31:20.753 "adrfam": "ipv4", 00:31:20.753 "trsvcid": "4420", 00:31:20.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:20.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:20.753 "hdgst": false, 00:31:20.753 "ddgst": false 00:31:20.753 }, 00:31:20.753 "method": "bdev_nvme_attach_controller" 00:31:20.753 }' 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:20.753 "params": { 00:31:20.753 "name": "Nvme1", 00:31:20.753 "trtype": "tcp", 00:31:20.753 "traddr": "10.0.0.2", 00:31:20.753 "adrfam": "ipv4", 00:31:20.753 "trsvcid": "4420", 00:31:20.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:20.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:20.753 "hdgst": false, 00:31:20.753 "ddgst": false 00:31:20.753 }, 00:31:20.753 "method": "bdev_nvme_attach_controller" 00:31:20.753 }' 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:20.753 12:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:20.753 "params": { 00:31:20.753 "name": "Nvme1", 00:31:20.753 "trtype": "tcp", 00:31:20.753 "traddr": "10.0.0.2", 00:31:20.753 "adrfam": "ipv4", 00:31:20.753 "trsvcid": "4420", 00:31:20.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:20.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:20.753 "hdgst": false, 00:31:20.753 "ddgst": false 00:31:20.753 }, 00:31:20.753 "method": "bdev_nvme_attach_controller" 00:31:20.753 }' 00:31:20.753 [2024-12-10 12:39:42.375234] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 
initialization... 00:31:20.753 [2024-12-10 12:39:42.375284] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:20.753 [2024-12-10 12:39:42.378555] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:31:20.753 [2024-12-10 12:39:42.378559] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:31:20.753 [2024-12-10 12:39:42.378602] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:31:20.753 [2024-12-10 12:39:42.378603] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:31:20.753 [2024-12-10 12:39:42.379739] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:31:20.753 [2024-12-10 12:39:42.379782] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:31:20.753 [2024-12-10 12:39:42.563999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.753 [2024-12-10 12:39:42.605754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:20.753 [2024-12-10 12:39:42.656289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.753 [2024-12-10 12:39:42.697478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.753 [2024-12-10 12:39:42.705244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:20.753 [2024-12-10 12:39:42.739443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:20.753 [2024-12-10 12:39:42.775373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.753 [2024-12-10 12:39:42.815824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:21.011 Running I/O for 1 seconds... 00:31:21.011 Running I/O for 1 seconds... 00:31:21.011 Running I/O for 1 seconds... 00:31:21.011 Running I/O for 1 seconds... 
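The four "Running I/O for 1 seconds..." lines above are the fan-out that bdev_io_wait.sh drives: four bdevperf instances (write, read, flush, unmap) started in the background on distinct core masks, their PIDs captured into WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID, then each PID waited on in turn. A sketch of that shape, with `run_io` as a stand-in for the real call (`bdevperf -m <mask> -i <id> --json <(gen_nvmf_target_json) -q 128 -o 4096 -w <workload> -t 1 -s 256`):

```shell
#!/usr/bin/env bash
# Fan-out/fan-in sketch of the parallel bdevperf runs traced above.
run_io() {
    sleep 0.1                     # placeholder for the 1-second I/O run
    echo "workload $1 done"
}

# Fan-out: one background job per workload, PID captured immediately.
run_io write & WRITE_PID=$!
run_io read  & READ_PID=$!
run_io flush & FLUSH_PID=$!
run_io unmap & UNMAP_PID=$!

# Fan-in: block on every workload, propagating the first failure.
for pid in "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"; do
    wait "$pid" || exit 1
done
```

Capturing `$!` right after each `&` is what lets the script `wait` on specific jobs (the `wait 1834013`, `wait 1834015`, ... records in the trace) rather than on everything at once.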
00:31:21.948 14260.00 IOPS, 55.70 MiB/s 00:31:21.948 Latency(us) 00:31:21.948 [2024-12-10T11:39:44.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:21.948 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:31:21.948 Nvme1n1 : 1.01 14321.85 55.94 0.00 0.00 8911.33 1531.55 10428.77 00:31:21.948 [2024-12-10T11:39:44.116Z] =================================================================================================================== 00:31:21.948 [2024-12-10T11:39:44.116Z] Total : 14321.85 55.94 0.00 0.00 8911.33 1531.55 10428.77 00:31:21.948 6671.00 IOPS, 26.06 MiB/s 00:31:21.948 Latency(us) 00:31:21.948 [2024-12-10T11:39:44.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:21.948 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:31:21.948 Nvme1n1 : 1.01 6724.80 26.27 0.00 0.00 18904.87 4616.01 26784.28 00:31:21.948 [2024-12-10T11:39:44.116Z] =================================================================================================================== 00:31:21.948 [2024-12-10T11:39:44.116Z] Total : 6724.80 26.27 0.00 0.00 18904.87 4616.01 26784.28 00:31:21.948 237056.00 IOPS, 926.00 MiB/s 00:31:21.948 Latency(us) 00:31:21.948 [2024-12-10T11:39:44.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:21.948 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:31:21.948 Nvme1n1 : 1.00 236692.39 924.58 0.00 0.00 538.06 229.73 1538.67 00:31:21.948 [2024-12-10T11:39:44.116Z] =================================================================================================================== 00:31:21.948 [2024-12-10T11:39:44.116Z] Total : 236692.39 924.58 0.00 0.00 538.06 229.73 1538.67 00:31:21.948 6825.00 IOPS, 26.66 MiB/s [2024-12-10T11:39:44.116Z] 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1834015 00:31:21.948 00:31:21.948 
Latency(us) 00:31:21.948 [2024-12-10T11:39:44.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:21.948 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:31:21.948 Nvme1n1 : 1.01 6918.16 27.02 0.00 0.00 18453.53 4046.14 35788.35 00:31:21.948 [2024-12-10T11:39:44.116Z] =================================================================================================================== 00:31:21.948 [2024-12-10T11:39:44.116Z] Total : 6918.16 27.02 0.00 0.00 18453.53 4046.14 35788.35 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1834017 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1834020 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:22.208 rmmod nvme_tcp 00:31:22.208 rmmod nvme_fabrics 00:31:22.208 rmmod nvme_keyring 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1833989 ']' 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1833989 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1833989 ']' 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1833989 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1833989 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:22.208 12:39:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1833989' 00:31:22.208 killing process with pid 1833989 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1833989 00:31:22.208 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1833989 00:31:22.468 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:22.468 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:22.468 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:22.468 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:31:22.468 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:31:22.468 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:22.468 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:31:22.468 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:22.468 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:22.468 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.468 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:22.468 12:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.373 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:24.373 00:31:24.373 real 0m10.709s 00:31:24.373 user 0m15.058s 00:31:24.373 sys 0m6.378s 00:31:24.373 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:24.373 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:24.373 ************************************ 00:31:24.373 END TEST nvmf_bdev_io_wait 00:31:24.373 ************************************ 00:31:24.632 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:24.632 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:24.632 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:24.632 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:24.632 ************************************ 00:31:24.632 START TEST nvmf_queue_depth 00:31:24.633 ************************************ 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:24.633 * Looking for test storage... 
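The "START TEST nvmf_queue_depth" banner above is emitted by the `run_test` wrapper from autotest_common.sh. A minimal sketch of the assumed core of that helper: print the framed START banner, run the test command, then print the framed END banner and propagate the exit code. The real helper also records per-test timing for the report; the banner text and timing format here are assumptions of this sketch, not the exact upstream implementation.

```shell
#!/usr/bin/env bash
# Sketch of a run_test-style banner wrapper (assumed shape).
run_test() {
    local name=$1; shift
    local start=$SECONDS rc=0
    printf '************************************\n'
    printf 'START TEST %s\n' "$name"
    printf '************************************\n'
    "$@" || rc=$?                 # run the suite, remember its status
    printf '************************************\n'
    printf 'END TEST %s (%ds, rc=%d)\n' "$name" $((SECONDS - start)) "$rc"
    printf '************************************\n'
    return "$rc"
}

run_test nvmf_queue_depth_demo echo "queue depth suite would run here"
```

Because the wrapped command's exit code is returned unchanged, a failing suite still fails the overall autotest run even though both banners are printed.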
00:31:24.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:24.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.633 --rc genhtml_branch_coverage=1 00:31:24.633 --rc genhtml_function_coverage=1 00:31:24.633 --rc genhtml_legend=1 00:31:24.633 --rc geninfo_all_blocks=1 00:31:24.633 --rc geninfo_unexecuted_blocks=1 00:31:24.633 00:31:24.633 ' 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:24.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.633 --rc genhtml_branch_coverage=1 00:31:24.633 --rc genhtml_function_coverage=1 00:31:24.633 --rc genhtml_legend=1 00:31:24.633 --rc geninfo_all_blocks=1 00:31:24.633 --rc geninfo_unexecuted_blocks=1 00:31:24.633 00:31:24.633 ' 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:24.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.633 --rc genhtml_branch_coverage=1 00:31:24.633 --rc genhtml_function_coverage=1 00:31:24.633 --rc genhtml_legend=1 00:31:24.633 --rc geninfo_all_blocks=1 00:31:24.633 --rc geninfo_unexecuted_blocks=1 00:31:24.633 00:31:24.633 ' 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:24.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.633 --rc genhtml_branch_coverage=1 00:31:24.633 --rc genhtml_function_coverage=1 00:31:24.633 --rc genhtml_legend=1 00:31:24.633 --rc 
geninfo_all_blocks=1 00:31:24.633 --rc geninfo_unexecuted_blocks=1 00:31:24.633 00:31:24.633 ' 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.633 12:39:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:24.633 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:24.634 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:24.634 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:24.634 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:24.634 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:24.634 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:24.893 12:39:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:24.893 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:24.893 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:31:24.893 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:31:24.893 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:24.893 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:31:24.893 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:24.893 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:24.893 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:24.893 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:24.893 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:24.893 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.893 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:24.893 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.893 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:24.893 12:39:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:24.893 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:31:24.893 12:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:31:30.315 
12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:30.315 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:30.315 12:39:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:30.315 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:30.315 Found net devices under 0000:86:00.0: cvl_0_0 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:30.315 Found net devices under 0000:86:00.1: cvl_0_1 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:30.315 12:39:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:30.315 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:30.575 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:30.575 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:30.575 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:30.575 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:30.575 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:30.575 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:30.575 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:30.575 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:30.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:30.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:31:30.575 00:31:30.575 --- 10.0.0.2 ping statistics --- 00:31:30.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.575 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:31:30.575 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:30.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:30.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:31:30.575 00:31:30.575 --- 10.0.0.1 ping statistics --- 00:31:30.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.575 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:31:30.575 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:30.575 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:31:30.575 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:30.575 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:30.575 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:30.575 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:30.575 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:30.575 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:30.575 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:30.575 12:39:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:31:30.575 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:30.575 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:30.575 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:30.575 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1837795 00:31:30.576 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1837795 00:31:30.576 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:30.576 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1837795 ']' 00:31:30.576 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.576 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:30.576 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:30.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:30.576 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:30.576 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:30.576 [2024-12-10 12:39:52.717229] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:30.576 [2024-12-10 12:39:52.718230] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:31:30.576 [2024-12-10 12:39:52.718268] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:30.835 [2024-12-10 12:39:52.804764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.835 [2024-12-10 12:39:52.844806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:30.835 [2024-12-10 12:39:52.844841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:30.835 [2024-12-10 12:39:52.844849] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:30.835 [2024-12-10 12:39:52.844855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:30.835 [2024-12-10 12:39:52.844860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:30.835 [2024-12-10 12:39:52.845389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:30.835 [2024-12-10 12:39:52.913993] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:30.835 [2024-12-10 12:39:52.914205] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:30.835 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:30.835 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:30.835 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:30.835 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:30.835 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:30.836 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:30.836 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:30.836 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.836 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:30.836 [2024-12-10 12:39:52.978054] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:30.836 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.836 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:30.836 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.836 12:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:31.095 Malloc0 00:31:31.095 12:39:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.095 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:31.095 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.095 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:31.095 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.095 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:31.096 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.096 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:31.096 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.096 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:31.096 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.096 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:31.096 [2024-12-10 12:39:53.054186] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:31.096 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.096 
12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1837995 00:31:31.096 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:31:31.096 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:31.096 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1837995 /var/tmp/bdevperf.sock 00:31:31.096 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1837995 ']' 00:31:31.096 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:31.096 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:31.096 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:31.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:31.096 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:31.096 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:31.096 [2024-12-10 12:39:53.105823] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:31:31.096 [2024-12-10 12:39:53.105866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1837995 ] 00:31:31.096 [2024-12-10 12:39:53.181507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:31.096 [2024-12-10 12:39:53.222658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.354 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:31.354 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:31.354 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:31.354 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.354 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:31.354 NVMe0n1 00:31:31.354 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.354 12:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:31.354 Running I/O for 10 seconds... 
00:31:33.669 11770.00 IOPS, 45.98 MiB/s [2024-12-10T11:39:56.773Z] 12054.00 IOPS, 47.09 MiB/s [2024-12-10T11:39:57.710Z] 12135.00 IOPS, 47.40 MiB/s [2024-12-10T11:39:58.647Z] 12165.25 IOPS, 47.52 MiB/s [2024-12-10T11:39:59.583Z] 12205.80 IOPS, 47.68 MiB/s [2024-12-10T11:40:00.959Z] 12251.17 IOPS, 47.86 MiB/s [2024-12-10T11:40:01.895Z] 12239.71 IOPS, 47.81 MiB/s [2024-12-10T11:40:02.831Z] 12268.25 IOPS, 47.92 MiB/s [2024-12-10T11:40:03.767Z] 12270.56 IOPS, 47.93 MiB/s [2024-12-10T11:40:03.767Z] 12280.20 IOPS, 47.97 MiB/s 00:31:41.599 Latency(us) 00:31:41.599 [2024-12-10T11:40:03.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:41.599 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:41.599 Verification LBA range: start 0x0 length 0x4000 00:31:41.599 NVMe0n1 : 10.07 12300.88 48.05 0.00 0.00 82985.19 19831.76 52428.80 00:31:41.599 [2024-12-10T11:40:03.767Z] =================================================================================================================== 00:31:41.599 [2024-12-10T11:40:03.767Z] Total : 12300.88 48.05 0.00 0.00 82985.19 19831.76 52428.80 00:31:41.599 { 00:31:41.599 "results": [ 00:31:41.599 { 00:31:41.599 "job": "NVMe0n1", 00:31:41.599 "core_mask": "0x1", 00:31:41.599 "workload": "verify", 00:31:41.599 "status": "finished", 00:31:41.599 "verify_range": { 00:31:41.599 "start": 0, 00:31:41.599 "length": 16384 00:31:41.599 }, 00:31:41.599 "queue_depth": 1024, 00:31:41.599 "io_size": 4096, 00:31:41.599 "runtime": 10.066438, 00:31:41.599 "iops": 12300.875443726967, 00:31:41.599 "mibps": 48.050294702058466, 00:31:41.599 "io_failed": 0, 00:31:41.599 "io_timeout": 0, 00:31:41.599 "avg_latency_us": 82985.18533768633, 00:31:41.599 "min_latency_us": 19831.76347826087, 00:31:41.599 "max_latency_us": 52428.8 00:31:41.599 } 00:31:41.599 ], 00:31:41.599 "core_count": 1 00:31:41.599 } 00:31:41.599 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 1837995 00:31:41.599 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1837995 ']' 00:31:41.599 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1837995 00:31:41.599 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:41.599 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:41.599 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1837995 00:31:41.599 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:41.599 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:41.599 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1837995' 00:31:41.599 killing process with pid 1837995 00:31:41.599 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1837995 00:31:41.599 Received shutdown signal, test time was about 10.000000 seconds 00:31:41.599 00:31:41.599 Latency(us) 00:31:41.599 [2024-12-10T11:40:03.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:41.599 [2024-12-10T11:40:03.767Z] =================================================================================================================== 00:31:41.599 [2024-12-10T11:40:03.767Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:41.599 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1837995 00:31:41.857 12:40:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:41.857 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:41.857 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:41.857 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:41.857 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:41.857 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:41.857 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:41.857 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:41.857 rmmod nvme_tcp 00:31:41.857 rmmod nvme_fabrics 00:31:41.857 rmmod nvme_keyring 00:31:41.857 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:41.857 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:41.857 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:31:41.857 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1837795 ']' 00:31:41.857 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1837795 00:31:41.857 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1837795 ']' 00:31:41.857 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1837795 00:31:41.857 12:40:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:41.857 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:41.857 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1837795 00:31:41.857 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:41.857 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:41.857 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1837795' 00:31:41.857 killing process with pid 1837795 00:31:41.857 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1837795 00:31:41.857 12:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1837795 00:31:42.115 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:42.115 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:42.115 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:42.115 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:42.115 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:31:42.115 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:42.115 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:31:42.115 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:42.115 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:42.115 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:42.115 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:42.115 12:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:44.044 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:44.044 00:31:44.044 real 0m19.613s 00:31:44.044 user 0m22.699s 00:31:44.044 sys 0m6.215s 00:31:44.044 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:44.304 ************************************ 00:31:44.304 END TEST nvmf_queue_depth 00:31:44.304 ************************************ 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:44.304 ************************************ 00:31:44.304 START 
TEST nvmf_target_multipath 00:31:44.304 ************************************ 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:44.304 * Looking for test storage... 00:31:44.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:44.304 12:40:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:44.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:44.304 --rc genhtml_branch_coverage=1 00:31:44.304 --rc genhtml_function_coverage=1 00:31:44.304 --rc genhtml_legend=1 00:31:44.304 --rc geninfo_all_blocks=1 00:31:44.304 --rc geninfo_unexecuted_blocks=1 00:31:44.304 00:31:44.304 ' 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:44.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:44.304 --rc genhtml_branch_coverage=1 00:31:44.304 --rc genhtml_function_coverage=1 00:31:44.304 --rc genhtml_legend=1 00:31:44.304 --rc geninfo_all_blocks=1 00:31:44.304 --rc geninfo_unexecuted_blocks=1 00:31:44.304 00:31:44.304 ' 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:44.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:44.304 --rc genhtml_branch_coverage=1 00:31:44.304 --rc genhtml_function_coverage=1 00:31:44.304 --rc genhtml_legend=1 00:31:44.304 --rc geninfo_all_blocks=1 00:31:44.304 --rc geninfo_unexecuted_blocks=1 00:31:44.304 00:31:44.304 ' 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:44.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:44.304 --rc genhtml_branch_coverage=1 00:31:44.304 --rc genhtml_function_coverage=1 00:31:44.304 --rc genhtml_legend=1 00:31:44.304 --rc geninfo_all_blocks=1 00:31:44.304 --rc geninfo_unexecuted_blocks=1 00:31:44.304 00:31:44.304 ' 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:44.304 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:44.305 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:44.305 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:44.564 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:44.564 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:44.564 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:44.564 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:44.564 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:44.564 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:44.564 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:31:44.564 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:44.564 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:44.564 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:44.564 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:44.564 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.564 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.564 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.564 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:44.564 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.564 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:44.564 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:44.564 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:44.564 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:44.565 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:44.565 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:44.565 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:44.565 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:44.565 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:44.565 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:44.565 12:40:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:44.565 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:44.565 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:44.565 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:44.565 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:31:44.565 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:44.565 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:44.565 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:44.565 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:44.565 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:44.565 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:44.565 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.565 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:44.565 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:44.565 12:40:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:44.565 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:44.565 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:44.565 12:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:51.135 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:51.135 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:51.135 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:51.135 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:51.135 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:51.135 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:51.135 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:51.135 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:51.135 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:51.135 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:51.135 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:51.135 12:40:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:51.135 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:51.135 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:51.135 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:51.135 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:51.135 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:51.135 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:51.135 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:51.135 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:51.135 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:51.135 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:51.136 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:51.136 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:51.136 Found net devices under 0000:86:00.0: cvl_0_0 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.136 12:40:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:51.136 Found net devices under 0000:86:00.1: cvl_0_1 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:51.136 12:40:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:51.136 12:40:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:51.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:51.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:31:51.136 00:31:51.136 --- 10.0.0.2 ping statistics --- 00:31:51.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.136 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:51.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:51.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:31:51.136 00:31:51.136 --- 10.0.0.1 ping statistics --- 00:31:51.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.136 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:51.136 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:51.137 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:51.137 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:51.137 only one NIC for nvmf test 00:31:51.137 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:51.137 12:40:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:51.137 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:51.137 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:51.137 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:51.137 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:51.137 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:51.137 rmmod nvme_tcp 00:31:51.137 rmmod nvme_fabrics 00:31:51.137 rmmod nvme_keyring 00:31:51.137 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:51.137 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:51.137 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:51.137 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:51.137 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:51.137 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:51.137 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:51.137 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:51.137 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:51.137 12:40:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:51.137 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:51.137 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:51.137 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:51.137 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:51.137 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:51.137 12:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.515 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:52.515 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:52.515 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:52.515 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:52.515 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.516 
12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:52.516 00:31:52.516 real 0m8.243s 00:31:52.516 user 0m1.858s 00:31:52.516 sys 0m4.407s 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:52.516 ************************************ 00:31:52.516 END TEST nvmf_target_multipath 00:31:52.516 ************************************ 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:52.516 ************************************ 00:31:52.516 START TEST nvmf_zcopy 00:31:52.516 ************************************ 00:31:52.516 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:52.776 * Looking for test storage... 
00:31:52.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:52.776 12:40:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:52.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.776 --rc genhtml_branch_coverage=1 00:31:52.776 --rc genhtml_function_coverage=1 00:31:52.776 --rc genhtml_legend=1 00:31:52.776 --rc geninfo_all_blocks=1 00:31:52.776 --rc geninfo_unexecuted_blocks=1 00:31:52.776 00:31:52.776 ' 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:52.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.776 --rc genhtml_branch_coverage=1 00:31:52.776 --rc genhtml_function_coverage=1 00:31:52.776 --rc genhtml_legend=1 00:31:52.776 --rc geninfo_all_blocks=1 00:31:52.776 --rc geninfo_unexecuted_blocks=1 00:31:52.776 00:31:52.776 ' 00:31:52.776 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:52.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.776 --rc genhtml_branch_coverage=1 00:31:52.776 --rc genhtml_function_coverage=1 00:31:52.776 --rc genhtml_legend=1 00:31:52.776 --rc geninfo_all_blocks=1 00:31:52.776 --rc geninfo_unexecuted_blocks=1 00:31:52.777 00:31:52.777 ' 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:52.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:52.777 --rc genhtml_branch_coverage=1 00:31:52.777 --rc genhtml_function_coverage=1 00:31:52.777 --rc genhtml_legend=1 00:31:52.777 --rc geninfo_all_blocks=1 00:31:52.777 --rc geninfo_unexecuted_blocks=1 00:31:52.777 00:31:52.777 ' 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:52.777 12:40:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:52.777 12:40:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:52.777 12:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:59.361 
12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:59.361 12:40:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:59.361 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:59.361 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:59.361 Found net devices under 0000:86:00.0: cvl_0_0 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:59.361 Found net devices under 0000:86:00.1: cvl_0_1 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:59.361 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:59.362 12:40:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:59.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:59.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:31:59.362 00:31:59.362 --- 10.0.0.2 ping statistics --- 00:31:59.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.362 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:59.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:59.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:31:59.362 00:31:59.362 --- 10.0.0.1 ping statistics --- 00:31:59.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.362 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=1846638 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1846638 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1846638 ']' 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:59.362 [2024-12-10 12:40:20.747472] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:59.362 [2024-12-10 12:40:20.748425] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:31:59.362 [2024-12-10 12:40:20.748461] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.362 [2024-12-10 12:40:20.830011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.362 [2024-12-10 12:40:20.870102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:59.362 [2024-12-10 12:40:20.870136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:59.362 [2024-12-10 12:40:20.870143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:59.362 [2024-12-10 12:40:20.870149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:59.362 [2024-12-10 12:40:20.870154] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:59.362 [2024-12-10 12:40:20.870687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.362 [2024-12-10 12:40:20.937262] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:59.362 [2024-12-10 12:40:20.937477] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.362 12:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:59.362 [2024-12-10 12:40:21.007452] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:59.362 
12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:59.362 [2024-12-10 12:40:21.035674] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:59.362 malloc0 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:59.362 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:59.362 { 00:31:59.362 "params": { 00:31:59.362 "name": "Nvme$subsystem", 00:31:59.362 "trtype": "$TEST_TRANSPORT", 00:31:59.362 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:59.362 "adrfam": "ipv4", 00:31:59.362 "trsvcid": "$NVMF_PORT", 00:31:59.362 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:59.362 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:59.362 "hdgst": ${hdgst:-false}, 00:31:59.362 "ddgst": ${ddgst:-false} 00:31:59.362 }, 00:31:59.362 "method": "bdev_nvme_attach_controller" 00:31:59.362 } 00:31:59.362 EOF 00:31:59.363 )") 00:31:59.363 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:59.363 12:40:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:59.363 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:59.363 12:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:59.363 "params": { 00:31:59.363 "name": "Nvme1", 00:31:59.363 "trtype": "tcp", 00:31:59.363 "traddr": "10.0.0.2", 00:31:59.363 "adrfam": "ipv4", 00:31:59.363 "trsvcid": "4420", 00:31:59.363 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:59.363 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:59.363 "hdgst": false, 00:31:59.363 "ddgst": false 00:31:59.363 }, 00:31:59.363 "method": "bdev_nvme_attach_controller" 00:31:59.363 }' 00:31:59.363 [2024-12-10 12:40:21.130007] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:31:59.363 [2024-12-10 12:40:21.130053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1846702 ] 00:31:59.363 [2024-12-10 12:40:21.205672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.363 [2024-12-10 12:40:21.245714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.622 Running I/O for 10 seconds... 
00:32:01.494 8301.00 IOPS, 64.85 MiB/s [2024-12-10T11:40:24.598Z] 8387.50 IOPS, 65.53 MiB/s [2024-12-10T11:40:25.974Z] 8404.00 IOPS, 65.66 MiB/s [2024-12-10T11:40:26.911Z] 8419.00 IOPS, 65.77 MiB/s [2024-12-10T11:40:27.847Z] 8430.60 IOPS, 65.86 MiB/s [2024-12-10T11:40:28.784Z] 8441.50 IOPS, 65.95 MiB/s [2024-12-10T11:40:29.724Z] 8446.29 IOPS, 65.99 MiB/s [2024-12-10T11:40:30.661Z] 8449.75 IOPS, 66.01 MiB/s [2024-12-10T11:40:31.597Z] 8445.67 IOPS, 65.98 MiB/s [2024-12-10T11:40:31.856Z] 8444.20 IOPS, 65.97 MiB/s 00:32:09.688 Latency(us) 00:32:09.688 [2024-12-10T11:40:31.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:09.688 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:32:09.688 Verification LBA range: start 0x0 length 0x1000 00:32:09.688 Nvme1n1 : 10.01 8447.11 65.99 0.00 0.00 15109.61 2322.25 21541.40 00:32:09.688 [2024-12-10T11:40:31.856Z] =================================================================================================================== 00:32:09.688 [2024-12-10T11:40:31.856Z] Total : 8447.11 65.99 0.00 0.00 15109.61 2322.25 21541.40 00:32:09.688 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1848307 00:32:09.688 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:32:09.688 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:09.688 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:32:09.688 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:32:09.688 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:09.688 12:40:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:09.688 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:09.688 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:09.688 { 00:32:09.688 "params": { 00:32:09.688 "name": "Nvme$subsystem", 00:32:09.688 "trtype": "$TEST_TRANSPORT", 00:32:09.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:09.688 "adrfam": "ipv4", 00:32:09.688 "trsvcid": "$NVMF_PORT", 00:32:09.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:09.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:09.688 "hdgst": ${hdgst:-false}, 00:32:09.688 "ddgst": ${ddgst:-false} 00:32:09.688 }, 00:32:09.688 "method": "bdev_nvme_attach_controller" 00:32:09.688 } 00:32:09.688 EOF 00:32:09.688 )") 00:32:09.688 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:09.688 [2024-12-10 12:40:31.763053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.688 [2024-12-10 12:40:31.763084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.688 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:32:09.688 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:09.688 12:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:09.688 "params": { 00:32:09.688 "name": "Nvme1", 00:32:09.688 "trtype": "tcp", 00:32:09.688 "traddr": "10.0.0.2", 00:32:09.688 "adrfam": "ipv4", 00:32:09.688 "trsvcid": "4420", 00:32:09.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:09.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:09.688 "hdgst": false, 00:32:09.688 "ddgst": false 00:32:09.688 }, 00:32:09.688 "method": "bdev_nvme_attach_controller" 00:32:09.688 }' 00:32:09.688 [2024-12-10 12:40:31.775013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.688 [2024-12-10 12:40:31.775026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.688 [2024-12-10 12:40:31.787009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.688 [2024-12-10 12:40:31.787019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.688 [2024-12-10 12:40:31.799005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:09.688 [2024-12-10 12:40:31.799014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:09.688 [2024-12-10 12:40:31.806397] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:32:09.688 [2024-12-10 12:40:31.806459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1848307 ]
00:32:09.688 [2024-12-10 12:40:31.811007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:09.688 [2024-12-10 12:40:31.811020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... identical two-line error pair repeated at ~12 ms intervals from 12:40:31.811 through 12:40:34.098; repeats omitted ...]
00:32:09.948 [2024-12-10 12:40:31.880167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:09.948 [2024-12-10 12:40:31.919901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:10.208 Running I/O for 5 seconds...
00:32:11.245 16239.00 IOPS, 126.87 MiB/s [2024-12-10T11:40:33.413Z]
00:32:12.023 [2024-12-10 12:40:34.098495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.023 [2024-12-10 12:40:34.098514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:32:12.023 [2024-12-10 12:40:34.112394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.023 [2024-12-10 12:40:34.112424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.023 [2024-12-10 12:40:34.127668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.023 [2024-12-10 12:40:34.127686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.023 [2024-12-10 12:40:34.142645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.023 [2024-12-10 12:40:34.142663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.024 [2024-12-10 12:40:34.156459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.024 [2024-12-10 12:40:34.156478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.024 [2024-12-10 12:40:34.171703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.024 [2024-12-10 12:40:34.171721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.024 [2024-12-10 12:40:34.182991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.024 [2024-12-10 12:40:34.183009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.283 [2024-12-10 12:40:34.197163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.283 [2024-12-10 12:40:34.197181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.283 [2024-12-10 12:40:34.212178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.283 [2024-12-10 12:40:34.212196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.283 [2024-12-10 12:40:34.227319] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.283 [2024-12-10 12:40:34.227337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.283 16289.00 IOPS, 127.26 MiB/s [2024-12-10T11:40:34.451Z] [2024-12-10 12:40:34.243115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.283 [2024-12-10 12:40:34.243134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.283 [2024-12-10 12:40:34.254691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.283 [2024-12-10 12:40:34.254710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.283 [2024-12-10 12:40:34.269002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.283 [2024-12-10 12:40:34.269020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.283 [2024-12-10 12:40:34.284338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.283 [2024-12-10 12:40:34.284356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.283 [2024-12-10 12:40:34.299252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.283 [2024-12-10 12:40:34.299270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.283 [2024-12-10 12:40:34.311810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.283 [2024-12-10 12:40:34.311829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.283 [2024-12-10 12:40:34.324392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.283 [2024-12-10 12:40:34.324410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.283 [2024-12-10 12:40:34.339468] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.283 [2024-12-10 12:40:34.339490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.283 [2024-12-10 12:40:34.354842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.283 [2024-12-10 12:40:34.354860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.283 [2024-12-10 12:40:34.368963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.283 [2024-12-10 12:40:34.368981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.283 [2024-12-10 12:40:34.384234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.283 [2024-12-10 12:40:34.384252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.283 [2024-12-10 12:40:34.399121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.283 [2024-12-10 12:40:34.399141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.283 [2024-12-10 12:40:34.409908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.283 [2024-12-10 12:40:34.409927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.283 [2024-12-10 12:40:34.425198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.284 [2024-12-10 12:40:34.425217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.284 [2024-12-10 12:40:34.439988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.284 [2024-12-10 12:40:34.440006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.543 [2024-12-10 12:40:34.454762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:12.543 [2024-12-10 12:40:34.454781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.543 [2024-12-10 12:40:34.466393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.543 [2024-12-10 12:40:34.466412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.543 [2024-12-10 12:40:34.480900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.543 [2024-12-10 12:40:34.480919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.543 [2024-12-10 12:40:34.496029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.543 [2024-12-10 12:40:34.496047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.543 [2024-12-10 12:40:34.511268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.543 [2024-12-10 12:40:34.511286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.543 [2024-12-10 12:40:34.522186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.543 [2024-12-10 12:40:34.522204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.543 [2024-12-10 12:40:34.537095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.543 [2024-12-10 12:40:34.537114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.543 [2024-12-10 12:40:34.552238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.543 [2024-12-10 12:40:34.552256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.543 [2024-12-10 12:40:34.567451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.543 
[2024-12-10 12:40:34.567469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.543 [2024-12-10 12:40:34.582781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.543 [2024-12-10 12:40:34.582800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.543 [2024-12-10 12:40:34.597031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.543 [2024-12-10 12:40:34.597049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.543 [2024-12-10 12:40:34.612648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.543 [2024-12-10 12:40:34.612670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.543 [2024-12-10 12:40:34.627473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.543 [2024-12-10 12:40:34.627490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.543 [2024-12-10 12:40:34.640002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.543 [2024-12-10 12:40:34.640020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.543 [2024-12-10 12:40:34.655055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.543 [2024-12-10 12:40:34.655073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.543 [2024-12-10 12:40:34.667620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.543 [2024-12-10 12:40:34.667637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.543 [2024-12-10 12:40:34.680620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.543 [2024-12-10 12:40:34.680639] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.543 [2024-12-10 12:40:34.696047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.543 [2024-12-10 12:40:34.696065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.802 [2024-12-10 12:40:34.711215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.803 [2024-12-10 12:40:34.711233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.803 [2024-12-10 12:40:34.722632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.803 [2024-12-10 12:40:34.722651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.803 [2024-12-10 12:40:34.737189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.803 [2024-12-10 12:40:34.737207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.803 [2024-12-10 12:40:34.752747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.803 [2024-12-10 12:40:34.752765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.803 [2024-12-10 12:40:34.767673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.803 [2024-12-10 12:40:34.767690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.803 [2024-12-10 12:40:34.783443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.803 [2024-12-10 12:40:34.783462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.803 [2024-12-10 12:40:34.799054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.803 [2024-12-10 12:40:34.799073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:12.803 [2024-12-10 12:40:34.812941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.803 [2024-12-10 12:40:34.812962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.803 [2024-12-10 12:40:34.827957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.803 [2024-12-10 12:40:34.827976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.803 [2024-12-10 12:40:34.842812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.803 [2024-12-10 12:40:34.842833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.803 [2024-12-10 12:40:34.855079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.803 [2024-12-10 12:40:34.855099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.803 [2024-12-10 12:40:34.868722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.803 [2024-12-10 12:40:34.868740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.803 [2024-12-10 12:40:34.883953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.803 [2024-12-10 12:40:34.883976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.803 [2024-12-10 12:40:34.899055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.803 [2024-12-10 12:40:34.899075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.803 [2024-12-10 12:40:34.912070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.803 [2024-12-10 12:40:34.912088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.803 [2024-12-10 12:40:34.927502] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.803 [2024-12-10 12:40:34.927520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.803 [2024-12-10 12:40:34.942831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.803 [2024-12-10 12:40:34.942849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:12.803 [2024-12-10 12:40:34.955782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:12.803 [2024-12-10 12:40:34.955801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.064 [2024-12-10 12:40:34.968661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.064 [2024-12-10 12:40:34.968680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.064 [2024-12-10 12:40:34.984072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.065 [2024-12-10 12:40:34.984091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.065 [2024-12-10 12:40:34.999090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.065 [2024-12-10 12:40:34.999109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.065 [2024-12-10 12:40:35.010646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.065 [2024-12-10 12:40:35.010665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.065 [2024-12-10 12:40:35.025127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.065 [2024-12-10 12:40:35.025146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.065 [2024-12-10 12:40:35.040407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:13.065 [2024-12-10 12:40:35.040427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.065 [2024-12-10 12:40:35.055601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.065 [2024-12-10 12:40:35.055619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.065 [2024-12-10 12:40:35.070829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.065 [2024-12-10 12:40:35.070848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.065 [2024-12-10 12:40:35.085503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.065 [2024-12-10 12:40:35.085522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.065 [2024-12-10 12:40:35.100855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.065 [2024-12-10 12:40:35.100874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.065 [2024-12-10 12:40:35.115855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.065 [2024-12-10 12:40:35.115874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.065 [2024-12-10 12:40:35.126165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.065 [2024-12-10 12:40:35.126200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.065 [2024-12-10 12:40:35.141510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.065 [2024-12-10 12:40:35.141529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.065 [2024-12-10 12:40:35.156846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.065 
[2024-12-10 12:40:35.156864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.065 [2024-12-10 12:40:35.172101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.065 [2024-12-10 12:40:35.172119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.065 [2024-12-10 12:40:35.187292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.065 [2024-12-10 12:40:35.187311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.065 [2024-12-10 12:40:35.200485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.065 [2024-12-10 12:40:35.200503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.065 [2024-12-10 12:40:35.215909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.065 [2024-12-10 12:40:35.215927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.325 [2024-12-10 12:40:35.232199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.325 [2024-12-10 12:40:35.232217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.325 16314.33 IOPS, 127.46 MiB/s [2024-12-10T11:40:35.493Z] [2024-12-10 12:40:35.247446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.325 [2024-12-10 12:40:35.247463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.325 [2024-12-10 12:40:35.263563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.325 [2024-12-10 12:40:35.263581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.325 [2024-12-10 12:40:35.279326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.325 
[2024-12-10 12:40:35.279344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.325 [2024-12-10 12:40:35.295177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.325 [2024-12-10 12:40:35.295195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.325 [2024-12-10 12:40:35.307982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.325 [2024-12-10 12:40:35.308000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.325 [2024-12-10 12:40:35.323062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.325 [2024-12-10 12:40:35.323080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.325 [2024-12-10 12:40:35.335825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.325 [2024-12-10 12:40:35.335842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.325 [2024-12-10 12:40:35.348921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.325 [2024-12-10 12:40:35.348939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.325 [2024-12-10 12:40:35.363891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.325 [2024-12-10 12:40:35.363908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.325 [2024-12-10 12:40:35.379115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.325 [2024-12-10 12:40:35.379134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.325 [2024-12-10 12:40:35.392965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.325 [2024-12-10 12:40:35.392984] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.325 [2024-12-10 12:40:35.408024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.325 [2024-12-10 12:40:35.408042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.325 [2024-12-10 12:40:35.423360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.325 [2024-12-10 12:40:35.423378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.325 [2024-12-10 12:40:35.438596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.325 [2024-12-10 12:40:35.438614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.325 [2024-12-10 12:40:35.452020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.325 [2024-12-10 12:40:35.452040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.325 [2024-12-10 12:40:35.466969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.325 [2024-12-10 12:40:35.466988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.325 [2024-12-10 12:40:35.478746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.325 [2024-12-10 12:40:35.478765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.585 [2024-12-10 12:40:35.493125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.585 [2024-12-10 12:40:35.493144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.585 [2024-12-10 12:40:35.508469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.585 [2024-12-10 12:40:35.508488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:13.585 [2024-12-10 12:40:35.523788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.585 [2024-12-10 12:40:35.523807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.585 [2024-12-10 12:40:35.539085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.585 [2024-12-10 12:40:35.539104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.585 [2024-12-10 12:40:35.549858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.585 [2024-12-10 12:40:35.549876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.585 [2024-12-10 12:40:35.565084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.585 [2024-12-10 12:40:35.565103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.585 [2024-12-10 12:40:35.579776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.585 [2024-12-10 12:40:35.579794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.585 [2024-12-10 12:40:35.595042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.585 [2024-12-10 12:40:35.595060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.585 [2024-12-10 12:40:35.607978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.585 [2024-12-10 12:40:35.607996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.585 [2024-12-10 12:40:35.623576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:13.585 [2024-12-10 12:40:35.623594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:13.585 [2024-12-10 12:40:35.639135] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:13.585 [2024-12-10 12:40:35.639153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:13.585 [... preceding two-record error pair repeated for each nvmf_subsystem_add_ns retry, 12:40:35.650 through 12:40:36.239 ...]
00:32:14.104 16331.25 IOPS, 127.59 MiB/s [2024-12-10T11:40:36.272Z]
00:32:14.104 [... error pair repeated, 12:40:36.254 through 12:40:37.240 ...]
00:32:15.142 16373.40 IOPS, 127.92 MiB/s [2024-12-10T11:40:37.310Z]
00:32:15.142 [2024-12-10 12:40:37.253012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:15.142 [2024-12-10 12:40:37.253030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:15.142 
00:32:15.142 Latency(us)
00:32:15.142 [2024-12-10T11:40:37.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:15.142 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:32:15.142 Nvme1n1 : 5.01 16372.59 127.91 0.00 0.00 7809.62 2023.07 13050.21
00:32:15.142 [2024-12-10T11:40:37.310Z] ===================================================================================================================
00:32:15.142 [2024-12-10T11:40:37.310Z] Total : 16372.59 127.91 0.00 0.00 7809.62 2023.07 13050.21
00:32:15.142 [... error pair repeated during shutdown, 12:40:37.263 through 12:40:37.287 ...]
00:32:15.142 [2024-12-10 12:40:37.299015] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:15.142 [2024-12-10 12:40:37.299031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:15.402 [... error pair repeated during shutdown, 12:40:37.311 through 12:40:37.407 ...]
00:32:15.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1848307) - No such process
00:32:15.402 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1848307
00:32:15.402 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:15.402 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:15.402 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:15.402 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:15.402 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:32:15.402 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:15.402 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:15.402 delay0
00:32:15.402 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:15.402 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:32:15.402 12:40:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:15.402 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:15.402 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:15.402 12:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:32:15.402 [2024-12-10 12:40:37.552852] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:32:23.565 Initializing NVMe Controllers
00:32:23.565 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:23.565 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:23.565 Initialization complete. Launching workers.
00:32:23.565 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 232, failed: 31576
00:32:23.565 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 31685, failed to submit 123
00:32:23.565 success 31613, unsuccessful 72, failed 0
00:32:23.565 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:32:23.565 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:32:23.565 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:23.565 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:32:23.565 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:23.565 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:32:23.565 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:23.565 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:23.565 rmmod nvme_tcp
00:32:23.565 rmmod nvme_fabrics
00:32:23.565 rmmod nvme_keyring
00:32:23.565 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:23.565 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:32:23.565 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:32:23.565 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1846638 ']'
00:32:23.565 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1846638
00:32:23.565 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1846638 ']'
00:32:23.565 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1846638
00:32:23.565 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:32:23.565 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:23.565 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1846638
00:32:23.566 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:32:23.566 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:32:23.566 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1846638'
00:32:23.566 killing process with pid 1846638
00:32:23.566 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1846638
00:32:23.566 12:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1846638
00:32:23.566 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:23.566 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:23.566 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:23.566 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:32:23.566 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:32:23.566 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:23.566 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:32:23.566 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:23.566 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:23.566 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:23.566 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:23.566 12:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:25.534 
00:32:25.534 real 0m32.611s
00:32:25.534 user 0m42.296s
00:32:25.534 sys 0m13.056s
00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:25.534 ************************************
00:32:25.534 END TEST nvmf_zcopy
00:32:25.534 ************************************
00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:32:25.534 
************************************ 00:32:25.534 START TEST nvmf_nmic 00:32:25.534 ************************************ 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:25.534 * Looking for test storage... 00:32:25.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:32:25.534 12:40:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:32:25.534 12:40:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:25.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.534 --rc genhtml_branch_coverage=1 00:32:25.534 --rc genhtml_function_coverage=1 00:32:25.534 --rc genhtml_legend=1 00:32:25.534 --rc geninfo_all_blocks=1 00:32:25.534 --rc geninfo_unexecuted_blocks=1 00:32:25.534 00:32:25.534 ' 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:25.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.534 --rc genhtml_branch_coverage=1 00:32:25.534 --rc genhtml_function_coverage=1 00:32:25.534 --rc genhtml_legend=1 00:32:25.534 --rc geninfo_all_blocks=1 00:32:25.534 --rc geninfo_unexecuted_blocks=1 00:32:25.534 00:32:25.534 ' 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:25.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.534 --rc genhtml_branch_coverage=1 00:32:25.534 --rc genhtml_function_coverage=1 00:32:25.534 --rc genhtml_legend=1 00:32:25.534 --rc geninfo_all_blocks=1 00:32:25.534 --rc geninfo_unexecuted_blocks=1 00:32:25.534 00:32:25.534 ' 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:25.534 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.534 --rc genhtml_branch_coverage=1 00:32:25.534 --rc genhtml_function_coverage=1 00:32:25.534 --rc genhtml_legend=1 00:32:25.534 --rc geninfo_all_blocks=1 00:32:25.534 --rc geninfo_unexecuted_blocks=1 00:32:25.534 00:32:25.534 ' 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:25.534 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:25.535 12:40:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.535 12:40:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:32:25.535 12:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.107 12:40:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:32.107 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:32:32.107 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:32.107 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:32.107 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:32.107 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:32.107 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:32.107 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:32:32.107 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:32.107 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:32:32.107 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:32:32.107 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:32:32.107 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:32:32.107 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:32:32.107 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:32:32.107 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:32.107 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:32.107 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:32.107 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:32.107 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:32.107 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:32.107 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:32.108 12:40:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:32.108 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:32.108 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.108 12:40:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:32.108 Found net devices under 0000:86:00.0: cvl_0_0 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.108 12:40:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:32.108 Found net devices under 0000:86:00.1: cvl_0_1 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:32.108 12:40:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:32.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:32.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:32:32.108 00:32:32.108 --- 10.0.0.2 ping statistics --- 00:32:32.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.108 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:32.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:32.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:32:32.108 00:32:32.108 --- 10.0.0.1 ping statistics --- 00:32:32.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.108 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1853887 
00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1853887 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1853887 ']' 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.108 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:32.109 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:32.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:32.109 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:32.109 12:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.109 [2024-12-10 12:40:53.409438] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:32.109 [2024-12-10 12:40:53.410369] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:32:32.109 [2024-12-10 12:40:53.410404] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:32.109 [2024-12-10 12:40:53.489978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:32.109 [2024-12-10 12:40:53.533028] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:32.109 [2024-12-10 12:40:53.533065] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:32.109 [2024-12-10 12:40:53.533072] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:32.109 [2024-12-10 12:40:53.533078] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:32.109 [2024-12-10 12:40:53.533083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:32.109 [2024-12-10 12:40:53.534530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.109 [2024-12-10 12:40:53.534564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:32.109 [2024-12-10 12:40:53.534672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:32.109 [2024-12-10 12:40:53.534673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:32.109 [2024-12-10 12:40:53.604326] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:32.109 [2024-12-10 12:40:53.604616] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:32.109 [2024-12-10 12:40:53.604647] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:32.109 [2024-12-10 12:40:53.604803] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:32.109 [2024-12-10 12:40:53.607668] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:32.109 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:32.109 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:32:32.109 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:32.109 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:32.109 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.368 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:32.368 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:32.368 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.368 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.368 [2024-12-10 12:40:54.283582] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:32.368 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.368 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.369 Malloc0 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.369 [2024-12-10 12:40:54.367813] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:32.369 12:40:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:32:32.369 test case1: single bdev can't be used in multiple subsystems 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.369 [2024-12-10 12:40:54.399208] 
bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:32:32.369 [2024-12-10 12:40:54.399231] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:32:32.369 [2024-12-10 12:40:54.399238] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:32.369 request: 00:32:32.369 { 00:32:32.369 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:32:32.369 "namespace": { 00:32:32.369 "bdev_name": "Malloc0", 00:32:32.369 "no_auto_visible": false, 00:32:32.369 "hide_metadata": false 00:32:32.369 }, 00:32:32.369 "method": "nvmf_subsystem_add_ns", 00:32:32.369 "req_id": 1 00:32:32.369 } 00:32:32.369 Got JSON-RPC error response 00:32:32.369 response: 00:32:32.369 { 00:32:32.369 "code": -32602, 00:32:32.369 "message": "Invalid parameters" 00:32:32.369 } 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:32:32.369 Adding namespace failed - expected result. 
00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:32:32.369 test case2: host connect to nvmf target in multiple paths 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:32.369 [2024-12-10 12:40:54.411318] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.369 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:32.628 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:32:32.887 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:32:32.887 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:32:32.887 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:32.887 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:32.887 12:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:32:34.789 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:34.789 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:34.789 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:34.789 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:34.789 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:34.789 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:32:34.789 12:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:34.789 [global] 00:32:34.789 thread=1 00:32:34.789 invalidate=1 00:32:34.789 rw=write 00:32:34.789 time_based=1 00:32:34.789 runtime=1 00:32:34.789 ioengine=libaio 00:32:34.789 direct=1 00:32:34.789 bs=4096 00:32:34.789 iodepth=1 00:32:34.789 norandommap=0 00:32:34.789 numjobs=1 00:32:34.789 00:32:34.789 verify_dump=1 00:32:34.789 verify_backlog=512 00:32:34.789 verify_state_save=0 00:32:34.789 do_verify=1 00:32:34.789 verify=crc32c-intel 00:32:34.789 [job0] 00:32:34.789 filename=/dev/nvme0n1 00:32:34.789 Could not set queue depth (nvme0n1) 00:32:35.047 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:35.047 fio-3.35 00:32:35.047 Starting 1 thread 00:32:36.425 00:32:36.425 job0: (groupid=0, jobs=1): err= 0: pid=1854728: Tue Dec 10 
12:40:58 2024 00:32:36.425 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:32:36.425 slat (nsec): min=6954, max=42088, avg=7915.51, stdev=1295.47 00:32:36.425 clat (usec): min=162, max=284, avg=201.84, stdev=22.18 00:32:36.425 lat (usec): min=187, max=292, avg=209.75, stdev=22.23 00:32:36.425 clat percentiles (usec): 00:32:36.425 | 1.00th=[ 184], 5.00th=[ 186], 10.00th=[ 186], 20.00th=[ 188], 00:32:36.425 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 192], 60.00th=[ 194], 00:32:36.425 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 247], 95.00th=[ 249], 00:32:36.425 | 99.00th=[ 253], 99.50th=[ 255], 99.90th=[ 265], 99.95th=[ 269], 00:32:36.425 | 99.99th=[ 285] 00:32:36.425 write: IOPS=2974, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1001msec); 0 zone resets 00:32:36.425 slat (nsec): min=6103, max=40570, avg=11113.25, stdev=1863.46 00:32:36.425 clat (usec): min=121, max=281, avg=138.92, stdev= 7.68 00:32:36.425 lat (usec): min=128, max=316, avg=150.03, stdev= 8.21 00:32:36.425 clat percentiles (usec): 00:32:36.425 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 135], 00:32:36.425 | 30.00th=[ 137], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 139], 00:32:36.425 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 145], 95.00th=[ 147], 00:32:36.425 | 99.00th=[ 182], 99.50th=[ 186], 99.90th=[ 219], 99.95th=[ 233], 00:32:36.425 | 99.99th=[ 281] 00:32:36.425 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:32:36.425 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:32:36.425 lat (usec) : 250=98.66%, 500=1.34% 00:32:36.425 cpu : usr=5.30%, sys=7.80%, ctx=5537, majf=0, minf=1 00:32:36.425 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:36.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.426 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.426 issued rwts: total=2560,2977,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.426 
latency : target=0, window=0, percentile=100.00%, depth=1 00:32:36.426 00:32:36.426 Run status group 0 (all jobs): 00:32:36.426 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:32:36.426 WRITE: bw=11.6MiB/s (12.2MB/s), 11.6MiB/s-11.6MiB/s (12.2MB/s-12.2MB/s), io=11.6MiB (12.2MB), run=1001-1001msec 00:32:36.426 00:32:36.426 Disk stats (read/write): 00:32:36.426 nvme0n1: ios=2460/2560, merge=0/0, ticks=481/335, in_queue=816, util=90.98% 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:36.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 
-- # nvmfcleanup 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:36.426 rmmod nvme_tcp 00:32:36.426 rmmod nvme_fabrics 00:32:36.426 rmmod nvme_keyring 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1853887 ']' 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1853887 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1853887 ']' 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1853887 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:36.426 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1853887 00:32:36.685 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:36.685 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:36.685 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1853887' 00:32:36.685 killing process with pid 1853887 00:32:36.685 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1853887 00:32:36.685 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1853887 00:32:36.685 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:36.685 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:36.685 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:36.685 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:32:36.685 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:32:36.685 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:36.685 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:32:36.685 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:36.685 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:36.685 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.685 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:32:36.685 12:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.224 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:39.224 00:32:39.224 real 0m13.597s 00:32:39.224 user 0m23.989s 00:32:39.224 sys 0m6.143s 00:32:39.224 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:39.224 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:39.224 ************************************ 00:32:39.224 END TEST nvmf_nmic 00:32:39.224 ************************************ 00:32:39.224 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:39.224 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:39.224 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:39.224 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:39.224 ************************************ 00:32:39.224 START TEST nvmf_fio_target 00:32:39.224 ************************************ 00:32:39.224 12:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:39.224 * Looking for test storage... 
00:32:39.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:39.224 
12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:39.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.224 --rc genhtml_branch_coverage=1 00:32:39.224 --rc genhtml_function_coverage=1 00:32:39.224 --rc genhtml_legend=1 00:32:39.224 --rc geninfo_all_blocks=1 00:32:39.224 --rc geninfo_unexecuted_blocks=1 00:32:39.224 00:32:39.224 ' 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:39.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.224 --rc genhtml_branch_coverage=1 00:32:39.224 --rc genhtml_function_coverage=1 00:32:39.224 --rc genhtml_legend=1 00:32:39.224 --rc geninfo_all_blocks=1 00:32:39.224 --rc geninfo_unexecuted_blocks=1 00:32:39.224 00:32:39.224 ' 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:39.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.224 --rc genhtml_branch_coverage=1 00:32:39.224 --rc genhtml_function_coverage=1 00:32:39.224 --rc genhtml_legend=1 00:32:39.224 --rc geninfo_all_blocks=1 00:32:39.224 --rc geninfo_unexecuted_blocks=1 00:32:39.224 00:32:39.224 ' 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:39.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:39.224 --rc genhtml_branch_coverage=1 00:32:39.224 --rc genhtml_function_coverage=1 00:32:39.224 --rc genhtml_legend=1 00:32:39.224 --rc geninfo_all_blocks=1 
00:32:39.224 --rc geninfo_unexecuted_blocks=1 00:32:39.224 00:32:39.224 ' 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 
00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:39.224 
12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:39.224 12:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:39.224 12:41:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:45.796 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:45.796 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:45.796 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:45.796 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:45.796 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:45.796 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:45.796 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:45.796 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:45.796 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:45.796 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:45.796 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:45.796 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:45.796 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:45.796 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:45.796 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:45.796 12:41:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:45.796 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:45.796 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:45.797 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:45.797 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:45.797 
12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:45.797 Found net 
devices under 0000:86:00.0: cvl_0_0 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:45.797 Found net devices under 0000:86:00.1: cvl_0_1 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:45.797 12:41:06 
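The `Found net devices under 0000:86:00.0: cvl_0_0` lines come from globbing sysfs under each PCI function and stripping the path prefix (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` then `"${pci_net_devs[@]##*/}"`). The same trick can be reproduced against a throwaway directory tree; the fake root below is an assumption for demonstration, the real path is `/sys/bus/pci/devices/$pci/net/`:

```shell
# Recreate the sysfs-glob device discovery from the trace using a
# fake tree (the real one lives under /sys/bus/pci/devices).
root=$(mktemp -d)
pci=0000:86:00.0                          # address observed in this run
mkdir -p "$root/$pci/net/cvl_0_0"         # kernel exposes one netdev here
pci_net_devs=("$root/$pci/net/"*)         # glob yields full paths
pci_net_devs=("${pci_net_devs[@]##*/}")   # strip dirs, keep iface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
```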
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:45.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:45.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:32:45.797 00:32:45.797 --- 10.0.0.2 ping statistics --- 00:32:45.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.797 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:45.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:45.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:32:45.797 00:32:45.797 --- 10.0.0.1 ping statistics --- 00:32:45.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:45.797 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:45.797 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:45.798 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:45.798 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:45.798 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:45.798 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:45.798 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:45.798 12:41:06 
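The `nvmf_tcp_init` sequence above splits the two ports of one NIC across network namespaces so target and initiator traffic crosses real interfaces. A condensed sketch of that pattern, using the interface names and addresses seen in this run (substitute your own; every command needs root and real hardware, so this is illustrative, not a portable script):

```shell
# Move the target-side port into its own namespace; the initiator
# stays in the root namespace. 10.0.0.1 = initiator, 10.0.0.2 = target.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator-facing interface, then
# verify reachability both ways, as the log's ping output shows.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```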
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1858325 00:32:45.798 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:45.798 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1858325 00:32:45.798 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1858325 ']' 00:32:45.798 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.798 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:45.798 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:45.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:45.798 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:45.798 12:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:45.798 [2024-12-10 12:41:07.034265] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:45.798 [2024-12-10 12:41:07.035188] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:32:45.798 [2024-12-10 12:41:07.035222] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:45.798 [2024-12-10 12:41:07.114195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:45.798 [2024-12-10 12:41:07.155835] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:45.798 [2024-12-10 12:41:07.155872] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:45.798 [2024-12-10 12:41:07.155879] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:45.798 [2024-12-10 12:41:07.155885] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:45.798 [2024-12-10 12:41:07.155890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:45.798 [2024-12-10 12:41:07.157446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:45.798 [2024-12-10 12:41:07.157555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:45.798 [2024-12-10 12:41:07.157660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.798 [2024-12-10 12:41:07.157662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:45.798 [2024-12-10 12:41:07.226120] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:45.798 [2024-12-10 12:41:07.226525] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:45.798 [2024-12-10 12:41:07.227173] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:32:45.798 [2024-12-10 12:41:07.227553] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:45.798 [2024-12-10 12:41:07.227593] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:45.798 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:45.798 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:32:45.798 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:45.798 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:45.798 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:45.798 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:45.798 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:45.798 [2024-12-10 12:41:07.462449] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:45.798 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:45.798 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:45.798 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:32:45.798 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:45.798 12:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:46.057 12:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:46.057 12:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:46.316 12:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:46.316 12:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:46.575 12:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:46.834 12:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:46.834 12:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:47.092 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:47.092 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:47.092 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 
-- # concat_malloc_bdevs+=Malloc6 00:32:47.092 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:47.351 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:47.610 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:47.610 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:47.868 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:47.868 12:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:47.869 12:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:48.127 [2024-12-10 12:41:10.186367] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:48.127 12:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:48.387 12:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:48.644 12:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:48.903 12:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:48.903 12:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:32:48.903 12:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:48.903 12:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:32:48.903 12:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:32:48.903 12:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:32:50.807 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:50.807 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:50.807 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:50.807 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:32:50.807 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:50.807 12:41:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:32:50.807 12:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:50.807 [global] 00:32:50.807 thread=1 00:32:50.807 invalidate=1 00:32:50.807 rw=write 00:32:50.807 time_based=1 00:32:50.807 runtime=1 00:32:50.807 ioengine=libaio 00:32:50.807 direct=1 00:32:50.807 bs=4096 00:32:50.807 iodepth=1 00:32:50.807 norandommap=0 00:32:50.807 numjobs=1 00:32:50.807 00:32:50.807 verify_dump=1 00:32:50.807 verify_backlog=512 00:32:50.807 verify_state_save=0 00:32:50.807 do_verify=1 00:32:50.807 verify=crc32c-intel 00:32:50.807 [job0] 00:32:50.807 filename=/dev/nvme0n1 00:32:50.807 [job1] 00:32:50.807 filename=/dev/nvme0n2 00:32:50.807 [job2] 00:32:50.807 filename=/dev/nvme0n3 00:32:50.807 [job3] 00:32:50.807 filename=/dev/nvme0n4 00:32:50.807 Could not set queue depth (nvme0n1) 00:32:50.807 Could not set queue depth (nvme0n2) 00:32:50.807 Could not set queue depth (nvme0n3) 00:32:50.807 Could not set queue depth (nvme0n4) 00:32:51.066 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:51.066 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:51.066 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:51.066 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:51.066 fio-3.35 00:32:51.066 Starting 4 threads 00:32:52.444 00:32:52.444 job0: (groupid=0, jobs=1): err= 0: pid=1859599: Tue Dec 10 12:41:14 2024 00:32:52.444 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:32:52.444 slat (nsec): min=9669, max=30115, avg=21691.45, stdev=3310.86 00:32:52.444 clat (usec): min=40816, max=41098, 
avg=40962.72, stdev=69.51 00:32:52.444 lat (usec): min=40825, max=41120, avg=40984.41, stdev=71.38 00:32:52.444 clat percentiles (usec): 00:32:52.444 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:52.444 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:52.444 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:52.444 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:52.444 | 99.99th=[41157] 00:32:52.444 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:32:52.444 slat (nsec): min=10118, max=54646, avg=11475.35, stdev=2720.79 00:32:52.444 clat (usec): min=158, max=346, avg=191.15, stdev=19.56 00:32:52.444 lat (usec): min=170, max=356, avg=202.62, stdev=20.51 00:32:52.444 clat percentiles (usec): 00:32:52.444 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 176], 00:32:52.444 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 194], 00:32:52.444 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 215], 95.00th=[ 221], 00:32:52.444 | 99.00th=[ 253], 99.50th=[ 285], 99.90th=[ 347], 99.95th=[ 347], 00:32:52.444 | 99.99th=[ 347] 00:32:52.444 bw ( KiB/s): min= 4096, max= 4096, per=15.86%, avg=4096.00, stdev= 0.00, samples=1 00:32:52.444 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:52.444 lat (usec) : 250=94.76%, 500=1.12% 00:32:52.444 lat (msec) : 50=4.12% 00:32:52.444 cpu : usr=0.89%, sys=0.40%, ctx=534, majf=0, minf=1 00:32:52.444 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:52.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.444 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.444 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:52.444 job1: (groupid=0, jobs=1): err= 0: pid=1859601: Tue Dec 10 12:41:14 2024 
00:32:52.444 read: IOPS=1038, BW=4156KiB/s (4255kB/s)(4164KiB/1002msec) 00:32:52.444 slat (nsec): min=3750, max=34719, avg=6977.92, stdev=3070.92 00:32:52.444 clat (usec): min=184, max=41117, avg=628.04, stdev=3809.72 00:32:52.444 lat (usec): min=190, max=41126, avg=635.02, stdev=3810.49 00:32:52.444 clat percentiles (usec): 00:32:52.444 | 1.00th=[ 196], 5.00th=[ 208], 10.00th=[ 223], 20.00th=[ 237], 00:32:52.444 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 262], 00:32:52.444 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 314], 00:32:52.444 | 99.00th=[ 490], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:52.444 | 99.99th=[41157] 00:32:52.444 write: IOPS=1532, BW=6132KiB/s (6279kB/s)(6144KiB/1002msec); 0 zone resets 00:32:52.444 slat (usec): min=6, max=19939, avg=32.90, stdev=620.51 00:32:52.444 clat (usec): min=122, max=387, avg=184.91, stdev=29.80 00:32:52.444 lat (usec): min=129, max=20235, avg=217.80, stdev=625.91 00:32:52.444 clat percentiles (usec): 00:32:52.444 | 1.00th=[ 135], 5.00th=[ 151], 10.00th=[ 161], 20.00th=[ 167], 00:32:52.444 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 184], 00:32:52.444 | 70.00th=[ 192], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 249], 00:32:52.444 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 347], 99.95th=[ 388], 00:32:52.444 | 99.99th=[ 388] 00:32:52.444 bw ( KiB/s): min= 4096, max= 8192, per=23.79%, avg=6144.00, stdev=2896.31, samples=2 00:32:52.444 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:32:52.444 lat (usec) : 250=72.56%, 500=27.05% 00:32:52.444 lat (msec) : 20=0.04%, 50=0.35% 00:32:52.444 cpu : usr=1.70%, sys=1.70%, ctx=2580, majf=0, minf=1 00:32:52.444 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:52.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.444 issued rwts: total=1041,1536,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:32:52.444 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:52.444 job2: (groupid=0, jobs=1): err= 0: pid=1859602: Tue Dec 10 12:41:14 2024 00:32:52.444 read: IOPS=1493, BW=5975KiB/s (6118kB/s)(6160KiB/1031msec) 00:32:52.444 slat (nsec): min=3407, max=51187, avg=8438.22, stdev=3144.07 00:32:52.444 clat (usec): min=184, max=41293, avg=421.56, stdev=2490.45 00:32:52.444 lat (usec): min=191, max=41300, avg=429.99, stdev=2490.63 00:32:52.444 clat percentiles (usec): 00:32:52.444 | 1.00th=[ 217], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 237], 00:32:52.444 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:32:52.444 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 281], 95.00th=[ 302], 00:32:52.444 | 99.00th=[ 494], 99.50th=[ 5997], 99.90th=[41157], 99.95th=[41157], 00:32:52.444 | 99.99th=[41157] 00:32:52.444 write: IOPS=1986, BW=7946KiB/s (8136kB/s)(8192KiB/1031msec); 0 zone resets 00:32:52.444 slat (nsec): min=3461, max=57592, avg=9325.10, stdev=5511.35 00:32:52.444 clat (usec): min=116, max=286, avg=165.99, stdev=24.16 00:32:52.444 lat (usec): min=123, max=296, avg=175.32, stdev=26.83 00:32:52.444 clat percentiles (usec): 00:32:52.444 | 1.00th=[ 123], 5.00th=[ 129], 10.00th=[ 135], 20.00th=[ 143], 00:32:52.444 | 30.00th=[ 149], 40.00th=[ 161], 50.00th=[ 169], 60.00th=[ 174], 00:32:52.444 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 198], 95.00th=[ 206], 00:32:52.444 | 99.00th=[ 223], 99.50th=[ 229], 99.90th=[ 262], 99.95th=[ 277], 00:32:52.444 | 99.99th=[ 285] 00:32:52.444 bw ( KiB/s): min= 8192, max= 8192, per=31.72%, avg=8192.00, stdev= 0.00, samples=2 00:32:52.444 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:32:52.444 lat (usec) : 250=82.69%, 500=17.00%, 750=0.06%, 1000=0.03% 00:32:52.444 lat (msec) : 10=0.03%, 20=0.03%, 50=0.17% 00:32:52.444 cpu : usr=2.04%, sys=3.88%, ctx=3591, majf=0, minf=1 00:32:52.444 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:32:52.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.444 issued rwts: total=1540,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.444 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:52.444 job3: (groupid=0, jobs=1): err= 0: pid=1859603: Tue Dec 10 12:41:14 2024 00:32:52.444 read: IOPS=2124, BW=8500KiB/s (8703kB/s)(8508KiB/1001msec) 00:32:52.444 slat (nsec): min=4659, max=21051, avg=7667.70, stdev=1519.19 00:32:52.444 clat (usec): min=182, max=586, avg=248.42, stdev=47.28 00:32:52.445 lat (usec): min=187, max=594, avg=256.09, stdev=47.78 00:32:52.445 clat percentiles (usec): 00:32:52.445 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 219], 00:32:52.445 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 245], 00:32:52.445 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 310], 95.00th=[ 334], 00:32:52.445 | 99.00th=[ 461], 99.50th=[ 486], 99.90th=[ 510], 99.95th=[ 553], 00:32:52.445 | 99.99th=[ 586] 00:32:52.445 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:52.445 slat (nsec): min=5515, max=43680, avg=10624.64, stdev=2662.97 00:32:52.445 clat (usec): min=120, max=321, avg=162.35, stdev=26.09 00:32:52.445 lat (usec): min=130, max=332, avg=172.97, stdev=27.34 00:32:52.445 clat percentiles (usec): 00:32:52.445 | 1.00th=[ 128], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 137], 00:32:52.445 | 30.00th=[ 143], 40.00th=[ 153], 50.00th=[ 161], 60.00th=[ 167], 00:32:52.445 | 70.00th=[ 176], 80.00th=[ 186], 90.00th=[ 198], 95.00th=[ 208], 00:32:52.445 | 99.00th=[ 229], 99.50th=[ 239], 99.90th=[ 310], 99.95th=[ 318], 00:32:52.445 | 99.99th=[ 322] 00:32:52.445 bw ( KiB/s): min= 9872, max= 9872, per=38.23%, avg=9872.00, stdev= 0.00, samples=1 00:32:52.445 iops : min= 2468, max= 2468, avg=2468.00, stdev= 0.00, samples=1 00:32:52.445 lat (usec) : 250=85.60%, 500=14.32%, 750=0.09% 00:32:52.445 cpu 
: usr=3.60%, sys=6.20%, ctx=4687, majf=0, minf=2 00:32:52.445 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:52.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.445 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:52.445 issued rwts: total=2127,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:52.445 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:52.445 00:32:52.445 Run status group 0 (all jobs): 00:32:52.445 READ: bw=17.9MiB/s (18.8MB/s), 87.4KiB/s-8500KiB/s (89.5kB/s-8703kB/s), io=18.5MiB (19.4MB), run=1001-1031msec 00:32:52.445 WRITE: bw=25.2MiB/s (26.4MB/s), 2034KiB/s-9.99MiB/s (2083kB/s-10.5MB/s), io=26.0MiB (27.3MB), run=1001-1031msec 00:32:52.445 00:32:52.445 Disk stats (read/write): 00:32:52.445 nvme0n1: ios=67/512, merge=0/0, ticks=718/93, in_queue=811, util=84.47% 00:32:52.445 nvme0n2: ios=1080/1536, merge=0/0, ticks=993/272, in_queue=1265, util=88.04% 00:32:52.445 nvme0n3: ios=1593/1804, merge=0/0, ticks=1370/288, in_queue=1658, util=91.39% 00:32:52.445 nvme0n4: ios=1743/2048, merge=0/0, ticks=468/319, in_queue=787, util=95.38% 00:32:52.445 12:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:52.445 [global] 00:32:52.445 thread=1 00:32:52.445 invalidate=1 00:32:52.445 rw=randwrite 00:32:52.445 time_based=1 00:32:52.445 runtime=1 00:32:52.445 ioengine=libaio 00:32:52.445 direct=1 00:32:52.445 bs=4096 00:32:52.445 iodepth=1 00:32:52.445 norandommap=0 00:32:52.445 numjobs=1 00:32:52.445 00:32:52.445 verify_dump=1 00:32:52.445 verify_backlog=512 00:32:52.445 verify_state_save=0 00:32:52.445 do_verify=1 00:32:52.445 verify=crc32c-intel 00:32:52.445 [job0] 00:32:52.445 filename=/dev/nvme0n1 00:32:52.445 [job1] 00:32:52.445 filename=/dev/nvme0n2 00:32:52.445 [job2] 00:32:52.445 
filename=/dev/nvme0n3 00:32:52.445 [job3] 00:32:52.445 filename=/dev/nvme0n4 00:32:52.445 Could not set queue depth (nvme0n1) 00:32:52.445 Could not set queue depth (nvme0n2) 00:32:52.445 Could not set queue depth (nvme0n3) 00:32:52.445 Could not set queue depth (nvme0n4) 00:32:52.704 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:52.704 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:52.704 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:52.704 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:52.704 fio-3.35 00:32:52.704 Starting 4 threads 00:32:54.079 00:32:54.079 job0: (groupid=0, jobs=1): err= 0: pid=1859965: Tue Dec 10 12:41:16 2024 00:32:54.079 read: IOPS=174, BW=697KiB/s (713kB/s)(700KiB/1005msec) 00:32:54.079 slat (nsec): min=7292, max=35362, avg=9616.69, stdev=4266.41 00:32:54.079 clat (usec): min=197, max=41215, avg=5132.49, stdev=13270.60 00:32:54.079 lat (usec): min=204, max=41222, avg=5142.10, stdev=13273.88 00:32:54.079 clat percentiles (usec): 00:32:54.079 | 1.00th=[ 210], 5.00th=[ 227], 10.00th=[ 235], 20.00th=[ 241], 00:32:54.079 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:32:54.079 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[41157], 95.00th=[41157], 00:32:54.079 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:54.079 | 99.99th=[41157] 00:32:54.079 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:32:54.079 slat (nsec): min=10648, max=46470, avg=14130.05, stdev=4363.07 00:32:54.080 clat (usec): min=143, max=283, avg=186.24, stdev=19.01 00:32:54.080 lat (usec): min=169, max=329, avg=200.37, stdev=19.67 00:32:54.080 clat percentiles (usec): 00:32:54.080 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 
174], 00:32:54.080 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 186], 00:32:54.080 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 210], 95.00th=[ 233], 00:32:54.080 | 99.00th=[ 247], 99.50th=[ 262], 99.90th=[ 285], 99.95th=[ 285], 00:32:54.080 | 99.99th=[ 285] 00:32:54.080 bw ( KiB/s): min= 4096, max= 4096, per=20.86%, avg=4096.00, stdev= 0.00, samples=1 00:32:54.080 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:54.080 lat (usec) : 250=88.79%, 500=8.15% 00:32:54.080 lat (msec) : 50=3.06% 00:32:54.080 cpu : usr=0.90%, sys=0.90%, ctx=688, majf=0, minf=1 00:32:54.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:54.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.080 issued rwts: total=175,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:54.080 job1: (groupid=0, jobs=1): err= 0: pid=1859966: Tue Dec 10 12:41:16 2024 00:32:54.080 read: IOPS=638, BW=2553KiB/s (2615kB/s)(2556KiB/1001msec) 00:32:54.080 slat (nsec): min=6605, max=30921, avg=7835.58, stdev=2025.55 00:32:54.080 clat (usec): min=207, max=41221, avg=1267.32, stdev=6364.90 00:32:54.080 lat (usec): min=215, max=41239, avg=1275.15, stdev=6366.09 00:32:54.080 clat percentiles (usec): 00:32:54.080 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 241], 00:32:54.080 | 30.00th=[ 245], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 249], 00:32:54.080 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 258], 95.00th=[ 293], 00:32:54.080 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:54.080 | 99.99th=[41157] 00:32:54.080 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:32:54.080 slat (nsec): min=9437, max=36967, avg=11007.74, stdev=2190.80 00:32:54.080 clat (usec): min=136, max=361, avg=166.77, stdev=20.32 
00:32:54.080 lat (usec): min=146, max=398, avg=177.78, stdev=20.98 00:32:54.080 clat percentiles (usec): 00:32:54.080 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:32:54.080 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:32:54.080 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 190], 95.00th=[ 204], 00:32:54.080 | 99.00th=[ 241], 99.50th=[ 269], 99.90th=[ 326], 99.95th=[ 363], 00:32:54.080 | 99.99th=[ 363] 00:32:54.080 bw ( KiB/s): min= 5352, max= 5352, per=27.25%, avg=5352.00, stdev= 0.00, samples=1 00:32:54.080 iops : min= 1338, max= 1338, avg=1338.00, stdev= 0.00, samples=1 00:32:54.080 lat (usec) : 250=88.70%, 500=10.22%, 750=0.12% 00:32:54.080 lat (msec) : 50=0.96% 00:32:54.080 cpu : usr=0.90%, sys=1.50%, ctx=1664, majf=0, minf=1 00:32:54.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:54.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.080 issued rwts: total=639,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:54.080 job2: (groupid=0, jobs=1): err= 0: pid=1859967: Tue Dec 10 12:41:16 2024 00:32:54.080 read: IOPS=997, BW=3988KiB/s (4084kB/s)(3992KiB/1001msec) 00:32:54.080 slat (nsec): min=7482, max=43346, avg=10783.25, stdev=4534.81 00:32:54.080 clat (usec): min=201, max=41066, avg=772.69, stdev=4370.60 00:32:54.080 lat (usec): min=214, max=41090, avg=783.48, stdev=4371.79 00:32:54.080 clat percentiles (usec): 00:32:54.080 | 1.00th=[ 233], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 249], 00:32:54.080 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:32:54.080 | 70.00th=[ 285], 80.00th=[ 306], 90.00th=[ 449], 95.00th=[ 461], 00:32:54.080 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:54.080 | 99.99th=[41157] 00:32:54.080 write: IOPS=1022, BW=4092KiB/s 
(4190kB/s)(4096KiB/1001msec); 0 zone resets 00:32:54.080 slat (nsec): min=10671, max=81076, avg=14655.91, stdev=6614.69 00:32:54.080 clat (usec): min=133, max=387, avg=191.03, stdev=22.34 00:32:54.080 lat (usec): min=145, max=415, avg=205.68, stdev=24.47 00:32:54.080 clat percentiles (usec): 00:32:54.080 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 176], 00:32:54.080 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:32:54.080 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 217], 95.00th=[ 233], 00:32:54.080 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 379], 99.95th=[ 388], 00:32:54.080 | 99.99th=[ 388] 00:32:54.080 bw ( KiB/s): min= 4096, max= 4096, per=20.86%, avg=4096.00, stdev= 0.00, samples=1 00:32:54.080 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:54.080 lat (usec) : 250=60.48%, 500=38.77%, 750=0.15% 00:32:54.080 lat (msec) : 50=0.59% 00:32:54.080 cpu : usr=1.50%, sys=3.90%, ctx=2024, majf=0, minf=1 00:32:54.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:54.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.080 issued rwts: total=998,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:54.080 job3: (groupid=0, jobs=1): err= 0: pid=1859968: Tue Dec 10 12:41:16 2024 00:32:54.080 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:32:54.080 slat (nsec): min=7284, max=37149, avg=8526.41, stdev=1332.49 00:32:54.080 clat (usec): min=208, max=550, avg=256.11, stdev=48.19 00:32:54.080 lat (usec): min=216, max=559, avg=264.63, stdev=48.17 00:32:54.080 clat percentiles (usec): 00:32:54.080 | 1.00th=[ 217], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 233], 00:32:54.080 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:32:54.080 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 
289], 95.00th=[ 424], 00:32:54.080 | 99.00th=[ 449], 99.50th=[ 457], 99.90th=[ 506], 99.95th=[ 510], 00:32:54.080 | 99.99th=[ 553] 00:32:54.080 write: IOPS=2371, BW=9487KiB/s (9714kB/s)(9496KiB/1001msec); 0 zone resets 00:32:54.080 slat (usec): min=10, max=118, avg=11.66, stdev= 4.19 00:32:54.080 clat (usec): min=127, max=383, avg=175.84, stdev=24.48 00:32:54.080 lat (usec): min=137, max=393, avg=187.49, stdev=25.46 00:32:54.080 clat percentiles (usec): 00:32:54.080 | 1.00th=[ 135], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 155], 00:32:54.080 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 180], 00:32:54.080 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 206], 95.00th=[ 215], 00:32:54.080 | 99.00th=[ 255], 99.50th=[ 285], 99.90th=[ 322], 99.95th=[ 363], 00:32:54.080 | 99.99th=[ 383] 00:32:54.080 bw ( KiB/s): min=10104, max=10104, per=51.45%, avg=10104.00, stdev= 0.00, samples=1 00:32:54.080 iops : min= 2526, max= 2526, avg=2526.00, stdev= 0.00, samples=1 00:32:54.080 lat (usec) : 250=84.26%, 500=15.65%, 750=0.09% 00:32:54.080 cpu : usr=3.10%, sys=7.80%, ctx=4422, majf=0, minf=2 00:32:54.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:54.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:54.080 issued rwts: total=2048,2374,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:54.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:54.080 00:32:54.080 Run status group 0 (all jobs): 00:32:54.080 READ: bw=15.0MiB/s (15.7MB/s), 697KiB/s-8184KiB/s (713kB/s-8380kB/s), io=15.1MiB (15.8MB), run=1001-1005msec 00:32:54.080 WRITE: bw=19.2MiB/s (20.1MB/s), 2038KiB/s-9487KiB/s (2087kB/s-9714kB/s), io=19.3MiB (20.2MB), run=1001-1005msec 00:32:54.080 00:32:54.080 Disk stats (read/write): 00:32:54.080 nvme0n1: ios=209/512, merge=0/0, ticks=1567/90, in_queue=1657, util=98.20% 00:32:54.080 nvme0n2: ios=670/1024, merge=0/0, 
ticks=1520/159, in_queue=1679, util=97.13% 00:32:54.080 nvme0n3: ios=558/736, merge=0/0, ticks=1275/132, in_queue=1407, util=97.19% 00:32:54.080 nvme0n4: ios=1557/2048, merge=0/0, ticks=388/326, in_queue=714, util=89.20% 00:32:54.080 12:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:54.080 [global] 00:32:54.080 thread=1 00:32:54.080 invalidate=1 00:32:54.080 rw=write 00:32:54.080 time_based=1 00:32:54.080 runtime=1 00:32:54.080 ioengine=libaio 00:32:54.080 direct=1 00:32:54.080 bs=4096 00:32:54.080 iodepth=128 00:32:54.080 norandommap=0 00:32:54.080 numjobs=1 00:32:54.080 00:32:54.080 verify_dump=1 00:32:54.080 verify_backlog=512 00:32:54.080 verify_state_save=0 00:32:54.080 do_verify=1 00:32:54.080 verify=crc32c-intel 00:32:54.080 [job0] 00:32:54.080 filename=/dev/nvme0n1 00:32:54.080 [job1] 00:32:54.080 filename=/dev/nvme0n2 00:32:54.080 [job2] 00:32:54.080 filename=/dev/nvme0n3 00:32:54.080 [job3] 00:32:54.080 filename=/dev/nvme0n4 00:32:54.080 Could not set queue depth (nvme0n1) 00:32:54.080 Could not set queue depth (nvme0n2) 00:32:54.080 Could not set queue depth (nvme0n3) 00:32:54.080 Could not set queue depth (nvme0n4) 00:32:54.365 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:54.365 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:54.365 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:54.365 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:54.365 fio-3.35 00:32:54.365 Starting 4 threads 00:32:55.752 00:32:55.752 job0: (groupid=0, jobs=1): err= 0: pid=1860343: Tue Dec 10 12:41:17 2024 00:32:55.752 read: IOPS=5079, BW=19.8MiB/s 
(20.8MB/s)(20.0MiB/1008msec) 00:32:55.752 slat (nsec): min=1108, max=23778k, avg=94817.63, stdev=759676.82 00:32:55.752 clat (usec): min=3229, max=49844, avg=13060.75, stdev=6688.02 00:32:55.752 lat (usec): min=3249, max=49865, avg=13155.57, stdev=6729.66 00:32:55.752 clat percentiles (usec): 00:32:55.752 | 1.00th=[ 4228], 5.00th=[ 7111], 10.00th=[ 9110], 20.00th=[ 9765], 00:32:55.752 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:32:55.752 | 70.00th=[11994], 80.00th=[14353], 90.00th=[25035], 95.00th=[27395], 00:32:55.752 | 99.00th=[42206], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:32:55.752 | 99.99th=[50070] 00:32:55.752 write: IOPS=5426, BW=21.2MiB/s (22.2MB/s)(21.4MiB/1008msec); 0 zone resets 00:32:55.752 slat (nsec): min=1936, max=12538k, avg=73493.12, stdev=509317.95 00:32:55.752 clat (usec): min=376, max=28031, avg=11169.64, stdev=3583.06 00:32:55.752 lat (usec): min=385, max=29736, avg=11243.14, stdev=3617.67 00:32:55.752 clat percentiles (usec): 00:32:55.752 | 1.00th=[ 2114], 5.00th=[ 6063], 10.00th=[ 7111], 20.00th=[ 9503], 00:32:55.752 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:32:55.752 | 70.00th=[11994], 80.00th=[13042], 90.00th=[16057], 95.00th=[17695], 00:32:55.752 | 99.00th=[22676], 99.50th=[23462], 99.90th=[24773], 99.95th=[24773], 00:32:55.752 | 99.99th=[27919] 00:32:55.752 bw ( KiB/s): min=20480, max=22256, per=29.10%, avg=21368.00, stdev=1255.82, samples=2 00:32:55.752 iops : min= 5120, max= 5564, avg=5342.00, stdev=313.96, samples=2 00:32:55.752 lat (usec) : 500=0.06%, 750=0.06%, 1000=0.22% 00:32:55.752 lat (msec) : 2=0.18%, 4=0.76%, 10=25.68%, 20=65.41%, 50=7.64% 00:32:55.752 cpu : usr=4.07%, sys=4.87%, ctx=551, majf=0, minf=1 00:32:55.752 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:55.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:32:55.752 issued rwts: total=5120,5470,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:55.752 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:55.752 job1: (groupid=0, jobs=1): err= 0: pid=1860344: Tue Dec 10 12:41:17 2024 00:32:55.752 read: IOPS=5342, BW=20.9MiB/s (21.9MB/s)(20.9MiB/1003msec) 00:32:55.752 slat (nsec): min=1324, max=10874k, avg=98752.04, stdev=613966.50 00:32:55.752 clat (usec): min=836, max=25775, avg=12292.64, stdev=2976.34 00:32:55.752 lat (usec): min=6153, max=25783, avg=12391.39, stdev=2993.97 00:32:55.752 clat percentiles (usec): 00:32:55.752 | 1.00th=[ 7504], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[10290], 00:32:55.752 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11469], 60.00th=[12125], 00:32:55.752 | 70.00th=[12780], 80.00th=[14353], 90.00th=[15926], 95.00th=[18482], 00:32:55.752 | 99.00th=[23462], 99.50th=[23725], 99.90th=[25822], 99.95th=[25822], 00:32:55.752 | 99.99th=[25822] 00:32:55.752 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:32:55.752 slat (usec): min=2, max=9560, avg=78.17, stdev=442.80 00:32:55.752 clat (usec): min=1507, max=22915, avg=10865.38, stdev=2321.02 00:32:55.753 lat (usec): min=1520, max=22920, avg=10943.55, stdev=2342.10 00:32:55.753 clat percentiles (usec): 00:32:55.753 | 1.00th=[ 3458], 5.00th=[ 7308], 10.00th=[ 8455], 20.00th=[10159], 00:32:55.753 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:32:55.753 | 70.00th=[11207], 80.00th=[11863], 90.00th=[12649], 95.00th=[15926], 00:32:55.753 | 99.00th=[17695], 99.50th=[20579], 99.90th=[22414], 99.95th=[22414], 00:32:55.753 | 99.99th=[22938] 00:32:55.753 bw ( KiB/s): min=22488, max=22568, per=30.68%, avg=22528.00, stdev=56.57, samples=2 00:32:55.753 iops : min= 5622, max= 5642, avg=5632.00, stdev=14.14, samples=2 00:32:55.753 lat (usec) : 1000=0.01% 00:32:55.753 lat (msec) : 2=0.17%, 4=0.46%, 10=17.37%, 20=80.37%, 50=1.61% 00:32:55.753 cpu : usr=3.59%, sys=5.69%, ctx=630, majf=0, minf=1 
00:32:55.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:32:55.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:55.753 issued rwts: total=5359,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:55.753 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:55.753 job2: (groupid=0, jobs=1): err= 0: pid=1860345: Tue Dec 10 12:41:17 2024 00:32:55.753 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:32:55.753 slat (nsec): min=1242, max=26301k, avg=130177.83, stdev=998798.91 00:32:55.753 clat (usec): min=7074, max=73791, avg=17967.12, stdev=9962.60 00:32:55.753 lat (usec): min=7077, max=76914, avg=18097.30, stdev=10031.61 00:32:55.753 clat percentiles (usec): 00:32:55.753 | 1.00th=[ 8225], 5.00th=[10552], 10.00th=[11863], 20.00th=[12518], 00:32:55.753 | 30.00th=[13304], 40.00th=[14091], 50.00th=[15533], 60.00th=[16057], 00:32:55.753 | 70.00th=[17171], 80.00th=[19792], 90.00th=[26608], 95.00th=[38536], 00:32:55.753 | 99.00th=[66847], 99.50th=[70779], 99.90th=[73925], 99.95th=[73925], 00:32:55.753 | 99.99th=[73925] 00:32:55.753 write: IOPS=3622, BW=14.2MiB/s (14.8MB/s)(14.2MiB/1004msec); 0 zone resets 00:32:55.753 slat (usec): min=2, max=20357, avg=139.43, stdev=880.60 00:32:55.753 clat (usec): min=535, max=64750, avg=16538.60, stdev=10077.65 00:32:55.753 lat (usec): min=1757, max=64755, avg=16678.04, stdev=10141.17 00:32:55.753 clat percentiles (usec): 00:32:55.753 | 1.00th=[ 6063], 5.00th=[ 7373], 10.00th=[ 8094], 20.00th=[11076], 00:32:55.753 | 30.00th=[12256], 40.00th=[13566], 50.00th=[14353], 60.00th=[15664], 00:32:55.753 | 70.00th=[16188], 80.00th=[19006], 90.00th=[23725], 95.00th=[35390], 00:32:55.753 | 99.00th=[64750], 99.50th=[64750], 99.90th=[64750], 99.95th=[64750], 00:32:55.753 | 99.99th=[64750] 00:32:55.753 bw ( KiB/s): min=12288, max=16384, per=19.52%, avg=14336.00, stdev=2896.31, samples=2 
00:32:55.753 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:32:55.753 lat (usec) : 750=0.01% 00:32:55.753 lat (msec) : 2=0.03%, 4=0.08%, 10=8.86%, 20=73.47%, 50=14.40% 00:32:55.753 lat (msec) : 100=3.14% 00:32:55.753 cpu : usr=2.99%, sys=4.79%, ctx=325, majf=0, minf=1 00:32:55.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:32:55.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:55.753 issued rwts: total=3584,3637,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:55.753 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:55.753 job3: (groupid=0, jobs=1): err= 0: pid=1860346: Tue Dec 10 12:41:17 2024 00:32:55.753 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:32:55.753 slat (nsec): min=1481, max=15444k, avg=139634.27, stdev=1052936.23 00:32:55.753 clat (usec): min=4237, max=48996, avg=17002.33, stdev=6545.80 00:32:55.753 lat (usec): min=4243, max=49002, avg=17141.97, stdev=6629.26 00:32:55.753 clat percentiles (usec): 00:32:55.753 | 1.00th=[ 7111], 5.00th=[12256], 10.00th=[12780], 20.00th=[13304], 00:32:55.753 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14615], 60.00th=[15139], 00:32:55.753 | 70.00th=[16909], 80.00th=[19530], 90.00th=[27919], 95.00th=[30802], 00:32:55.753 | 99.00th=[41157], 99.50th=[44303], 99.90th=[49021], 99.95th=[49021], 00:32:55.753 | 99.99th=[49021] 00:32:55.753 write: IOPS=3736, BW=14.6MiB/s (15.3MB/s)(14.7MiB/1008msec); 0 zone resets 00:32:55.753 slat (usec): min=2, max=23821, avg=127.08, stdev=887.91 00:32:55.753 clat (usec): min=1538, max=48987, avg=17716.60, stdev=8946.41 00:32:55.753 lat (usec): min=1551, max=48993, avg=17843.68, stdev=9018.93 00:32:55.753 clat percentiles (usec): 00:32:55.753 | 1.00th=[ 4621], 5.00th=[ 7635], 10.00th=[ 9241], 20.00th=[11338], 00:32:55.753 | 30.00th=[13435], 40.00th=[13829], 50.00th=[14615], 60.00th=[16909], 
00:32:55.753 | 70.00th=[18482], 80.00th=[23462], 90.00th=[31065], 95.00th=[41681], 00:32:55.753 | 99.00th=[42206], 99.50th=[45351], 99.90th=[45876], 99.95th=[49021], 00:32:55.753 | 99.99th=[49021] 00:32:55.753 bw ( KiB/s): min=12728, max=16384, per=19.82%, avg=14556.00, stdev=2585.18, samples=2 00:32:55.753 iops : min= 3182, max= 4096, avg=3639.00, stdev=646.30, samples=2 00:32:55.753 lat (msec) : 2=0.03%, 4=0.19%, 10=7.89%, 20=68.50%, 50=23.39% 00:32:55.753 cpu : usr=3.38%, sys=4.67%, ctx=299, majf=0, minf=1 00:32:55.753 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:32:55.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:55.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:55.753 issued rwts: total=3584,3766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:55.753 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:55.753 00:32:55.753 Run status group 0 (all jobs): 00:32:55.753 READ: bw=68.4MiB/s (71.7MB/s), 13.9MiB/s-20.9MiB/s (14.6MB/s-21.9MB/s), io=68.9MiB (72.3MB), run=1003-1008msec 00:32:55.753 WRITE: bw=71.7MiB/s (75.2MB/s), 14.2MiB/s-21.9MiB/s (14.8MB/s-23.0MB/s), io=72.3MiB (75.8MB), run=1003-1008msec 00:32:55.753 00:32:55.753 Disk stats (read/write): 00:32:55.753 nvme0n1: ios=4204/4608, merge=0/0, ticks=43111/35526, in_queue=78637, util=85.57% 00:32:55.753 nvme0n2: ios=4647/4727, merge=0/0, ticks=35595/32188, in_queue=67783, util=96.75% 00:32:55.753 nvme0n3: ios=3093/3359, merge=0/0, ticks=34600/34961, in_queue=69561, util=98.12% 00:32:55.753 nvme0n4: ios=3116/3239, merge=0/0, ticks=50072/54366, in_queue=104438, util=99.69% 00:32:55.753 12:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:32:55.753 [global] 00:32:55.753 thread=1 00:32:55.753 invalidate=1 00:32:55.753 rw=randwrite 00:32:55.753 
time_based=1 00:32:55.753 runtime=1 00:32:55.753 ioengine=libaio 00:32:55.753 direct=1 00:32:55.753 bs=4096 00:32:55.753 iodepth=128 00:32:55.753 norandommap=0 00:32:55.753 numjobs=1 00:32:55.753 00:32:55.753 verify_dump=1 00:32:55.753 verify_backlog=512 00:32:55.753 verify_state_save=0 00:32:55.753 do_verify=1 00:32:55.753 verify=crc32c-intel 00:32:55.753 [job0] 00:32:55.753 filename=/dev/nvme0n1 00:32:55.753 [job1] 00:32:55.753 filename=/dev/nvme0n2 00:32:55.753 [job2] 00:32:55.753 filename=/dev/nvme0n3 00:32:55.753 [job3] 00:32:55.753 filename=/dev/nvme0n4 00:32:55.753 Could not set queue depth (nvme0n1) 00:32:55.753 Could not set queue depth (nvme0n2) 00:32:55.753 Could not set queue depth (nvme0n3) 00:32:55.753 Could not set queue depth (nvme0n4) 00:32:56.010 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:56.010 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:56.010 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:56.010 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:56.010 fio-3.35 00:32:56.010 Starting 4 threads 00:32:57.382 00:32:57.382 job0: (groupid=0, jobs=1): err= 0: pid=1860713: Tue Dec 10 12:41:19 2024 00:32:57.382 read: IOPS=3730, BW=14.6MiB/s (15.3MB/s)(14.6MiB/1004msec) 00:32:57.382 slat (nsec): min=1483, max=46638k, avg=145880.18, stdev=1221825.06 00:32:57.382 clat (usec): min=3111, max=93100, avg=18449.96, stdev=14668.05 00:32:57.382 lat (usec): min=5943, max=93109, avg=18595.84, stdev=14760.31 00:32:57.382 clat percentiles (usec): 00:32:57.382 | 1.00th=[ 7439], 5.00th=[ 8455], 10.00th=[ 9503], 20.00th=[10159], 00:32:57.382 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11994], 60.00th=[14746], 00:32:57.382 | 70.00th=[20055], 80.00th=[25560], 90.00th=[31851], 95.00th=[41157], 
00:32:57.382 | 99.00th=[87557], 99.50th=[92799], 99.90th=[92799], 99.95th=[92799], 00:32:57.382 | 99.99th=[92799] 00:32:57.382 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:32:57.382 slat (usec): min=2, max=26700, avg=98.38, stdev=799.58 00:32:57.382 clat (usec): min=240, max=42815, avg=14217.62, stdev=7655.74 00:32:57.382 lat (usec): min=270, max=42819, avg=14316.00, stdev=7690.75 00:32:57.382 clat percentiles (usec): 00:32:57.382 | 1.00th=[ 2442], 5.00th=[ 4490], 10.00th=[ 7635], 20.00th=[ 8979], 00:32:57.382 | 30.00th=[ 9765], 40.00th=[10814], 50.00th=[11994], 60.00th=[12911], 00:32:57.382 | 70.00th=[16057], 80.00th=[19792], 90.00th=[25297], 95.00th=[32900], 00:32:57.382 | 99.00th=[36963], 99.50th=[38011], 99.90th=[42730], 99.95th=[42730], 00:32:57.382 | 99.99th=[42730] 00:32:57.382 bw ( KiB/s): min=16384, max=16384, per=26.24%, avg=16384.00, stdev= 0.00, samples=2 00:32:57.382 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:32:57.382 lat (usec) : 250=0.01%, 500=0.04%, 750=0.01% 00:32:57.382 lat (msec) : 2=0.29%, 4=2.22%, 10=21.13%, 20=53.31%, 50=21.36% 00:32:57.382 lat (msec) : 100=1.62% 00:32:57.382 cpu : usr=2.79%, sys=4.89%, ctx=259, majf=0, minf=1 00:32:57.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:57.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:57.382 issued rwts: total=3745,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.382 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:57.382 job1: (groupid=0, jobs=1): err= 0: pid=1860714: Tue Dec 10 12:41:19 2024 00:32:57.382 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec) 00:32:57.382 slat (nsec): min=1231, max=27167k, avg=133851.94, stdev=1203654.08 00:32:57.382 clat (usec): min=416, max=104496, avg=17828.84, stdev=15034.08 00:32:57.382 lat (usec): min=423, max=104507, 
avg=17962.69, stdev=15149.36 00:32:57.382 clat percentiles (usec): 00:32:57.382 | 1.00th=[ 807], 5.00th=[ 1565], 10.00th=[ 2638], 20.00th=[ 8455], 00:32:57.382 | 30.00th=[ 10028], 40.00th=[ 11338], 50.00th=[ 13698], 60.00th=[ 19792], 00:32:57.382 | 70.00th=[ 22414], 80.00th=[ 26346], 90.00th=[ 32113], 95.00th=[ 42206], 00:32:57.382 | 99.00th=[ 98042], 99.50th=[101188], 99.90th=[104334], 99.95th=[104334], 00:32:57.382 | 99.99th=[104334] 00:32:57.382 write: IOPS=3309, BW=12.9MiB/s (13.6MB/s)(13.1MiB/1012msec); 0 zone resets 00:32:57.382 slat (usec): min=2, max=22291, avg=158.63, stdev=944.56 00:32:57.382 clat (usec): min=400, max=104368, avg=21975.52, stdev=20405.01 00:32:57.382 lat (usec): min=868, max=104373, avg=22134.15, stdev=20501.99 00:32:57.382 clat percentiles (msec): 00:32:57.382 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 8], 20.00th=[ 11], 00:32:57.382 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 17], 00:32:57.382 | 70.00th=[ 26], 80.00th=[ 31], 90.00th=[ 53], 95.00th=[ 70], 00:32:57.382 | 99.00th=[ 96], 99.50th=[ 100], 99.90th=[ 104], 99.95th=[ 104], 00:32:57.382 | 99.99th=[ 105] 00:32:57.382 bw ( KiB/s): min=12288, max=13480, per=20.63%, avg=12884.00, stdev=842.87, samples=2 00:32:57.382 iops : min= 3072, max= 3370, avg=3221.00, stdev=210.72, samples=2 00:32:57.382 lat (usec) : 500=0.03%, 750=0.20%, 1000=1.06% 00:32:57.382 lat (msec) : 2=2.23%, 4=4.92%, 10=16.31%, 20=40.07%, 50=28.61% 00:32:57.382 lat (msec) : 100=5.90%, 250=0.67% 00:32:57.382 cpu : usr=2.27%, sys=3.46%, ctx=319, majf=0, minf=1 00:32:57.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:32:57.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.382 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:57.382 issued rwts: total=3072,3349,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.382 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:57.382 job2: (groupid=0, jobs=1): err= 0: 
pid=1860715: Tue Dec 10 12:41:19 2024 00:32:57.382 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:32:57.382 slat (nsec): min=1383, max=21469k, avg=165896.09, stdev=1260421.31 00:32:57.382 clat (usec): min=1967, max=96911, avg=20790.55, stdev=12923.57 00:32:57.382 lat (usec): min=1978, max=96913, avg=20956.45, stdev=13040.10 00:32:57.382 clat percentiles (usec): 00:32:57.382 | 1.00th=[ 3916], 5.00th=[ 8979], 10.00th=[10552], 20.00th=[12387], 00:32:57.382 | 30.00th=[13566], 40.00th=[16057], 50.00th=[17433], 60.00th=[20841], 00:32:57.382 | 70.00th=[22414], 80.00th=[27657], 90.00th=[32900], 95.00th=[38536], 00:32:57.382 | 99.00th=[92799], 99.50th=[93848], 99.90th=[96994], 99.95th=[96994], 00:32:57.382 | 99.99th=[96994] 00:32:57.382 write: IOPS=3105, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1003msec); 0 zone resets 00:32:57.382 slat (usec): min=2, max=20042, avg=137.52, stdev=995.00 00:32:57.382 clat (usec): min=1154, max=96882, avg=20362.65, stdev=13048.92 00:32:57.382 lat (usec): min=1165, max=96886, avg=20500.17, stdev=13106.57 00:32:57.382 clat percentiles (usec): 00:32:57.382 | 1.00th=[ 3032], 5.00th=[ 7767], 10.00th=[ 9896], 20.00th=[12518], 00:32:57.382 | 30.00th=[14222], 40.00th=[15401], 50.00th=[16450], 60.00th=[17695], 00:32:57.382 | 70.00th=[20317], 80.00th=[25560], 90.00th=[36439], 95.00th=[47973], 00:32:57.382 | 99.00th=[78119], 99.50th=[84411], 99.90th=[92799], 99.95th=[92799], 00:32:57.382 | 99.99th=[96994] 00:32:57.382 bw ( KiB/s): min= 9976, max=14600, per=19.68%, avg=12288.00, stdev=3269.66, samples=2 00:32:57.382 iops : min= 2494, max= 3650, avg=3072.00, stdev=817.42, samples=2 00:32:57.382 lat (msec) : 2=0.29%, 4=1.26%, 10=8.89%, 20=54.21%, 50=31.79% 00:32:57.382 lat (msec) : 100=3.56% 00:32:57.382 cpu : usr=2.50%, sys=3.39%, ctx=217, majf=0, minf=2 00:32:57.382 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:32:57.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.382 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:57.382 issued rwts: total=3072,3115,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.382 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:57.382 job3: (groupid=0, jobs=1): err= 0: pid=1860716: Tue Dec 10 12:41:19 2024 00:32:57.382 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:32:57.382 slat (nsec): min=1382, max=15504k, avg=95875.42, stdev=776012.81 00:32:57.382 clat (usec): min=2192, max=38461, avg=12748.95, stdev=5384.86 00:32:57.382 lat (usec): min=2229, max=39038, avg=12844.83, stdev=5441.30 00:32:57.382 clat percentiles (usec): 00:32:57.382 | 1.00th=[ 4752], 5.00th=[ 7242], 10.00th=[ 8356], 20.00th=[ 9110], 00:32:57.382 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10814], 60.00th=[11731], 00:32:57.382 | 70.00th=[13566], 80.00th=[15664], 90.00th=[20317], 95.00th=[24249], 00:32:57.382 | 99.00th=[32637], 99.50th=[33424], 99.90th=[38536], 99.95th=[38536], 00:32:57.382 | 99.99th=[38536] 00:32:57.382 write: IOPS=5200, BW=20.3MiB/s (21.3MB/s)(20.5MiB/1007msec); 0 zone resets 00:32:57.382 slat (usec): min=2, max=14936, avg=83.26, stdev=682.56 00:32:57.382 clat (usec): min=196, max=98722, avg=11873.63, stdev=12212.44 00:32:57.382 lat (usec): min=249, max=98731, avg=11956.90, stdev=12237.02 00:32:57.382 clat percentiles (usec): 00:32:57.382 | 1.00th=[ 848], 5.00th=[ 2057], 10.00th=[ 3785], 20.00th=[ 6587], 00:32:57.382 | 30.00th=[ 8160], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[10421], 00:32:57.382 | 70.00th=[12125], 80.00th=[13566], 90.00th=[17695], 95.00th=[22152], 00:32:57.382 | 99.00th=[94897], 99.50th=[99091], 99.90th=[99091], 99.95th=[99091], 00:32:57.383 | 99.99th=[99091] 00:32:57.383 bw ( KiB/s): min=16432, max=24576, per=32.84%, avg=20504.00, stdev=5758.68, samples=2 00:32:57.383 iops : min= 4108, max= 6144, avg=5126.00, stdev=1439.67, samples=2 00:32:57.383 lat (usec) : 250=0.03%, 500=0.11%, 750=0.19%, 1000=0.97% 00:32:57.383 lat (msec) : 2=0.80%, 4=3.50%, 
10=41.60%, 20=43.32%, 50=8.57% 00:32:57.383 lat (msec) : 100=0.91% 00:32:57.383 cpu : usr=5.27%, sys=5.27%, ctx=349, majf=0, minf=1 00:32:57.383 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:57.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.383 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:57.383 issued rwts: total=5120,5237,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.383 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:57.383 00:32:57.383 Run status group 0 (all jobs): 00:32:57.383 READ: bw=57.9MiB/s (60.7MB/s), 11.9MiB/s-19.9MiB/s (12.4MB/s-20.8MB/s), io=58.6MiB (61.5MB), run=1003-1012msec 00:32:57.383 WRITE: bw=61.0MiB/s (63.9MB/s), 12.1MiB/s-20.3MiB/s (12.7MB/s-21.3MB/s), io=61.7MiB (64.7MB), run=1003-1012msec 00:32:57.383 00:32:57.383 Disk stats (read/write): 00:32:57.383 nvme0n1: ios=3094/3518, merge=0/0, ticks=34234/26075, in_queue=60309, util=90.38% 00:32:57.383 nvme0n2: ios=2914/3072, merge=0/0, ticks=45751/48188, in_queue=93939, util=99.70% 00:32:57.383 nvme0n3: ios=2338/2560, merge=0/0, ticks=49674/48726, in_queue=98400, util=94.38% 00:32:57.383 nvme0n4: ios=4114/4489, merge=0/0, ticks=43896/43153, in_queue=87049, util=98.53% 00:32:57.383 12:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:57.383 12:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1860943 00:32:57.383 12:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:57.383 12:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:57.383 [global] 00:32:57.383 thread=1 00:32:57.383 invalidate=1 00:32:57.383 rw=read 00:32:57.383 time_based=1 00:32:57.383 runtime=10 00:32:57.383 ioengine=libaio 
00:32:57.383 direct=1 00:32:57.383 bs=4096 00:32:57.383 iodepth=1 00:32:57.383 norandommap=1 00:32:57.383 numjobs=1 00:32:57.383 00:32:57.383 [job0] 00:32:57.383 filename=/dev/nvme0n1 00:32:57.383 [job1] 00:32:57.383 filename=/dev/nvme0n2 00:32:57.383 [job2] 00:32:57.383 filename=/dev/nvme0n3 00:32:57.383 [job3] 00:32:57.383 filename=/dev/nvme0n4 00:32:57.383 Could not set queue depth (nvme0n1) 00:32:57.383 Could not set queue depth (nvme0n2) 00:32:57.383 Could not set queue depth (nvme0n3) 00:32:57.383 Could not set queue depth (nvme0n4) 00:32:57.383 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:57.383 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:57.383 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:57.383 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:57.383 fio-3.35 00:32:57.383 Starting 4 threads 00:33:00.664 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_delete concat0 00:33:00.664 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:33:00.664 fio: pid=1861089, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:00.665 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_delete raid0 00:33:00.665 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=52363264, buflen=4096 00:33:00.665 fio: pid=1861088, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:00.665 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:00.665 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:33:00.665 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=43257856, buflen=4096 00:33:00.665 fio: pid=1861086, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:00.665 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:00.665 12:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:33:00.923 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=1097728, buflen=4096 00:33:00.923 fio: pid=1861087, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:33:00.923 12:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:00.923 12:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:33:00.923 00:33:00.923 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1861086: Tue Dec 10 12:41:23 2024 00:33:00.923 read: IOPS=3364, BW=13.1MiB/s (13.8MB/s)(41.3MiB/3139msec) 00:33:00.923 slat (usec): min=5, max=15570, avg= 9.68, stdev=179.04 00:33:00.923 clat (usec): min=179, max=41386, avg=283.80, stdev=1635.45 00:33:00.923 lat (usec): min=188, max=41394, avg=293.48, stdev=1645.71 00:33:00.923 clat percentiles (usec): 00:33:00.923 | 1.00th=[ 188], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 208], 00:33:00.923 | 
30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 215], 60.00th=[ 217], 00:33:00.923 | 70.00th=[ 221], 80.00th=[ 225], 90.00th=[ 239], 95.00th=[ 251], 00:33:00.923 | 99.00th=[ 281], 99.50th=[ 330], 99.90th=[41157], 99.95th=[41157], 00:33:00.923 | 99.99th=[41157] 00:33:00.923 bw ( KiB/s): min= 96, max=17944, per=47.76%, avg=13461.50, stdev=6843.70, samples=6 00:33:00.923 iops : min= 24, max= 4486, avg=3365.33, stdev=1710.90, samples=6 00:33:00.923 lat (usec) : 250=94.61%, 500=5.18%, 750=0.03%, 1000=0.01% 00:33:00.923 lat (msec) : 50=0.16% 00:33:00.923 cpu : usr=0.76%, sys=3.35%, ctx=10564, majf=0, minf=2 00:33:00.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:00.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.923 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.923 issued rwts: total=10562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:00.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:00.923 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1861087: Tue Dec 10 12:41:23 2024 00:33:00.923 read: IOPS=80, BW=319KiB/s (327kB/s)(1072KiB/3361msec) 00:33:00.923 slat (usec): min=6, max=15698, avg=225.88, stdev=1625.69 00:33:00.923 clat (usec): min=188, max=42472, avg=12310.95, stdev=18642.78 00:33:00.923 lat (usec): min=195, max=56980, avg=12512.01, stdev=18791.37 00:33:00.923 clat percentiles (usec): 00:33:00.923 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 229], 20.00th=[ 249], 00:33:00.923 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 277], 00:33:00.923 | 70.00th=[ 570], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:00.923 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:33:00.923 | 99.99th=[42730] 00:33:00.923 bw ( KiB/s): min= 264, max= 360, per=1.08%, avg=304.00, stdev=35.05, samples=6 00:33:00.923 iops : min= 66, max= 90, avg=76.00, stdev= 8.76, samples=6 
00:33:00.923 lat (usec) : 250=21.19%, 500=48.33%, 750=0.37% 00:33:00.923 lat (msec) : 10=0.37%, 50=29.37% 00:33:00.923 cpu : usr=0.06%, sys=0.24%, ctx=273, majf=0, minf=2 00:33:00.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:00.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.923 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.923 issued rwts: total=269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:00.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:00.923 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1861088: Tue Dec 10 12:41:23 2024 00:33:00.923 read: IOPS=4372, BW=17.1MiB/s (17.9MB/s)(49.9MiB/2924msec) 00:33:00.923 slat (nsec): min=6287, max=31588, avg=7400.63, stdev=1085.10 00:33:00.923 clat (usec): min=178, max=10623, avg=218.61, stdev=95.10 00:33:00.923 lat (usec): min=191, max=10639, avg=226.01, stdev=95.20 00:33:00.923 clat percentiles (usec): 00:33:00.923 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 204], 00:33:00.923 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 221], 00:33:00.923 | 70.00th=[ 225], 80.00th=[ 227], 90.00th=[ 233], 95.00th=[ 243], 00:33:00.923 | 99.00th=[ 262], 99.50th=[ 285], 99.90th=[ 433], 99.95th=[ 627], 00:33:00.923 | 99.99th=[ 1647] 00:33:00.923 bw ( KiB/s): min=16400, max=19272, per=62.71%, avg=17672.00, stdev=1041.88, samples=5 00:33:00.923 iops : min= 4100, max= 4818, avg=4418.00, stdev=260.47, samples=5 00:33:00.923 lat (usec) : 250=97.68%, 500=2.25%, 750=0.03%, 1000=0.02% 00:33:00.923 lat (msec) : 2=0.01%, 20=0.01% 00:33:00.923 cpu : usr=0.96%, sys=4.17%, ctx=12786, majf=0, minf=2 00:33:00.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:00.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.923 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:33:00.923 issued rwts: total=12785,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:00.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:00.923 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1861089: Tue Dec 10 12:41:23 2024 00:33:00.923 read: IOPS=24, BW=98.2KiB/s (101kB/s)(268KiB/2728msec) 00:33:00.923 slat (nsec): min=10421, max=38524, avg=22108.49, stdev=3639.67 00:33:00.923 clat (usec): min=393, max=41313, avg=40372.10, stdev=4958.51 00:33:00.923 lat (usec): min=431, max=41323, avg=40394.21, stdev=4956.46 00:33:00.923 clat percentiles (usec): 00:33:00.923 | 1.00th=[ 396], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:00.923 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:00.923 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:00.923 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:00.923 | 99.99th=[41157] 00:33:00.923 bw ( KiB/s): min= 96, max= 104, per=0.35%, avg=99.20, stdev= 4.38, samples=5 00:33:00.924 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:33:00.924 lat (usec) : 500=1.47% 00:33:00.924 lat (msec) : 50=97.06% 00:33:00.924 cpu : usr=0.11%, sys=0.00%, ctx=68, majf=0, minf=1 00:33:00.924 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:00.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.924 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:00.924 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:00.924 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:00.924 00:33:00.924 Run status group 0 (all jobs): 00:33:00.924 READ: bw=27.5MiB/s (28.9MB/s), 98.2KiB/s-17.1MiB/s (101kB/s-17.9MB/s), io=92.5MiB (97.0MB), run=2728-3361msec 00:33:00.924 00:33:00.924 Disk stats (read/write): 00:33:00.924 nvme0n1: ios=10537/0, merge=0/0, ticks=2935/0, 
in_queue=2935, util=95.01% 00:33:00.924 nvme0n2: ios=268/0, merge=0/0, ticks=3301/0, in_queue=3301, util=94.97% 00:33:00.924 nvme0n3: ios=12582/0, merge=0/0, ticks=2680/0, in_queue=2680, util=96.52% 00:33:00.924 nvme0n4: ios=64/0, merge=0/0, ticks=2584/0, in_queue=2584, util=96.45% 00:33:01.182 12:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:01.182 12:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:33:01.440 12:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:01.440 12:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:33:01.698 12:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:01.698 12:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:33:01.698 12:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:01.698 12:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:33:01.956 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:33:01.956 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 
-- # wait 1860943 00:33:01.956 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:33:01.956 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:02.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:02.214 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:02.214 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:33:02.214 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:02.214 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:02.214 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:02.214 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:02.214 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:33:02.214 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:33:02.214 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:33:02.214 nvmf hotplug test: fio failed as expected 00:33:02.214 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:02.214 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm 
-f ./local-job0-0-verify.state 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:02.473 rmmod nvme_tcp 00:33:02.473 rmmod nvme_fabrics 00:33:02.473 rmmod nvme_keyring 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1858325 ']' 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1858325 00:33:02.473 
12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1858325 ']' 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1858325 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1858325 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1858325' 00:33:02.473 killing process with pid 1858325 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1858325 00:33:02.473 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1858325 00:33:02.733 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:02.733 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:02.733 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:02.733 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:33:02.733 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-save 00:33:02.733 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:02.733 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:02.733 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:02.733 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:02.733 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:02.733 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:02.733 12:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:04.636 12:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:04.636 00:33:04.636 real 0m25.805s 00:33:04.636 user 1m30.137s 00:33:04.636 sys 0m11.222s 00:33:04.636 12:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:04.636 12:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:04.636 ************************************ 00:33:04.636 END TEST nvmf_fio_target 00:33:04.636 ************************************ 00:33:04.636 12:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:04.636 12:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:04.636 12:41:26 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:04.636 12:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:04.896 ************************************ 00:33:04.896 START TEST nvmf_bdevio 00:33:04.896 ************************************ 00:33:04.896 12:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:04.896 * Looking for test storage... 00:33:04.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:33:04.896 12:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:04.896 12:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:33:04.896 12:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:04.896 12:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:04.896 12:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:04.896 12:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:04.896 12:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:04.896 12:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:33:04.896 12:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:33:04.896 12:41:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:04.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.896 --rc genhtml_branch_coverage=1 
00:33:04.896 --rc genhtml_function_coverage=1 00:33:04.896 --rc genhtml_legend=1 00:33:04.896 --rc geninfo_all_blocks=1 00:33:04.896 --rc geninfo_unexecuted_blocks=1 00:33:04.896 00:33:04.896 ' 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:04.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.896 --rc genhtml_branch_coverage=1 00:33:04.896 --rc genhtml_function_coverage=1 00:33:04.896 --rc genhtml_legend=1 00:33:04.896 --rc geninfo_all_blocks=1 00:33:04.896 --rc geninfo_unexecuted_blocks=1 00:33:04.896 00:33:04.896 ' 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:04.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.896 --rc genhtml_branch_coverage=1 00:33:04.896 --rc genhtml_function_coverage=1 00:33:04.896 --rc genhtml_legend=1 00:33:04.896 --rc geninfo_all_blocks=1 00:33:04.896 --rc geninfo_unexecuted_blocks=1 00:33:04.896 00:33:04.896 ' 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:04.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:04.896 --rc genhtml_branch_coverage=1 00:33:04.896 --rc genhtml_function_coverage=1 00:33:04.896 --rc genhtml_legend=1 00:33:04.896 --rc geninfo_all_blocks=1 00:33:04.896 --rc geninfo_unexecuted_blocks=1 00:33:04.896 00:33:04.896 ' 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.896 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:04.897 12:41:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:33:04.897 12:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:11.466 12:41:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:11.466 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:11.466 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:11.466 12:41:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:11.466 Found net devices under 0000:86:00.0: cvl_0_0 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:11.466 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:11.467 Found net devices under 0000:86:00.1: cvl_0_1 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:11.467 12:41:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:11.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:11.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:33:11.467 00:33:11.467 --- 10.0.0.2 ping statistics --- 00:33:11.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:11.467 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:11.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:11.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:33:11.467 00:33:11.467 --- 10.0.0.1 ping statistics --- 00:33:11.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:11.467 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=1865331 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1865331 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1865331 ']' 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:11.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:11.467 12:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:11.467 [2024-12-10 12:41:32.976066] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:11.467 [2024-12-10 12:41:32.976978] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:33:11.467 [2024-12-10 12:41:32.977011] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:11.467 [2024-12-10 12:41:33.056072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:11.467 [2024-12-10 12:41:33.097067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:11.467 [2024-12-10 12:41:33.097105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:11.467 [2024-12-10 12:41:33.097112] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:11.467 [2024-12-10 12:41:33.097119] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:11.467 [2024-12-10 12:41:33.097124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:11.467 [2024-12-10 12:41:33.098660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:11.467 [2024-12-10 12:41:33.098694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:11.467 [2024-12-10 12:41:33.098799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:11.467 [2024-12-10 12:41:33.098800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:11.467 [2024-12-10 12:41:33.166070] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:11.467 [2024-12-10 12:41:33.167189] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:11.467 [2024-12-10 12:41:33.167395] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:11.467 [2024-12-10 12:41:33.167807] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:11.467 [2024-12-10 12:41:33.167840] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:11.467 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:11.467 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:33:11.467 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:11.467 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:11.467 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:11.467 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:11.467 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:11.467 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.467 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:11.467 [2024-12-10 12:41:33.239652] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:11.467 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.467 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:11.467 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.467 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:11.467 Malloc0 00:33:11.468 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.468 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:11.468 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.468 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:11.468 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.468 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:11.468 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.468 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:11.468 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.468 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:11.468 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.468 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:11.468 [2024-12-10 12:41:33.323877] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:33:11.468 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.468 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:33:11.468 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:33:11.468 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:33:11.468 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:33:11.468 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:11.468 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:11.468 { 00:33:11.468 "params": { 00:33:11.468 "name": "Nvme$subsystem", 00:33:11.468 "trtype": "$TEST_TRANSPORT", 00:33:11.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:11.468 "adrfam": "ipv4", 00:33:11.468 "trsvcid": "$NVMF_PORT", 00:33:11.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:11.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:11.468 "hdgst": ${hdgst:-false}, 00:33:11.468 "ddgst": ${ddgst:-false} 00:33:11.468 }, 00:33:11.468 "method": "bdev_nvme_attach_controller" 00:33:11.468 } 00:33:11.468 EOF 00:33:11.468 )") 00:33:11.468 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:33:11.468 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:33:11.468 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:33:11.468 12:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:11.468 "params": { 00:33:11.468 "name": "Nvme1", 00:33:11.468 "trtype": "tcp", 00:33:11.468 "traddr": "10.0.0.2", 00:33:11.468 "adrfam": "ipv4", 00:33:11.468 "trsvcid": "4420", 00:33:11.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:11.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:11.468 "hdgst": false, 00:33:11.468 "ddgst": false 00:33:11.468 }, 00:33:11.468 "method": "bdev_nvme_attach_controller" 00:33:11.468 }' 00:33:11.468 [2024-12-10 12:41:33.377050] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:33:11.468 [2024-12-10 12:41:33.377103] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1865432 ] 00:33:11.468 [2024-12-10 12:41:33.456653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:11.468 [2024-12-10 12:41:33.500249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.468 [2024-12-10 12:41:33.500141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:11.468 [2024-12-10 12:41:33.500249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:11.726 I/O targets: 00:33:11.726 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:33:11.726 00:33:11.726 00:33:11.726 CUnit - A unit testing framework for C - Version 2.1-3 00:33:11.726 http://cunit.sourceforge.net/ 00:33:11.726 00:33:11.726 00:33:11.726 Suite: bdevio tests on: Nvme1n1 00:33:11.726 Test: blockdev write read block ...passed 00:33:11.984 Test: blockdev write zeroes read block ...passed 00:33:11.984 Test: blockdev write zeroes read no split ...passed 00:33:11.984 Test: blockdev 
write zeroes read split ...passed 00:33:11.984 Test: blockdev write zeroes read split partial ...passed 00:33:11.984 Test: blockdev reset ...[2024-12-10 12:41:33.927847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:11.984 [2024-12-10 12:41:33.927912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe2c050 (9): Bad file descriptor 00:33:11.984 [2024-12-10 12:41:33.979526] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:33:11.984 passed 00:33:11.984 Test: blockdev write read 8 blocks ...passed 00:33:11.984 Test: blockdev write read size > 128k ...passed 00:33:11.984 Test: blockdev write read invalid size ...passed 00:33:11.984 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:11.984 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:11.984 Test: blockdev write read max offset ...passed 00:33:11.984 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:12.243 Test: blockdev writev readv 8 blocks ...passed 00:33:12.243 Test: blockdev writev readv 30 x 1block ...passed 00:33:12.243 Test: blockdev writev readv block ...passed 00:33:12.243 Test: blockdev writev readv size > 128k ...passed 00:33:12.243 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:12.243 Test: blockdev comparev and writev ...[2024-12-10 12:41:34.272241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:12.243 [2024-12-10 12:41:34.272281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:12.243 [2024-12-10 12:41:34.272298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:12.243 
[2024-12-10 12:41:34.272307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:12.243 [2024-12-10 12:41:34.272608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:12.243 [2024-12-10 12:41:34.272620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:12.243 [2024-12-10 12:41:34.272632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:12.243 [2024-12-10 12:41:34.272640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:12.243 [2024-12-10 12:41:34.272934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:12.243 [2024-12-10 12:41:34.272945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:12.243 [2024-12-10 12:41:34.272957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:12.243 [2024-12-10 12:41:34.272964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:12.243 [2024-12-10 12:41:34.273257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:12.243 [2024-12-10 12:41:34.273270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:12.243 [2024-12-10 12:41:34.273282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:12.243 [2024-12-10 12:41:34.273291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:12.243 passed 00:33:12.243 Test: blockdev nvme passthru rw ...passed 00:33:12.243 Test: blockdev nvme passthru vendor specific ...[2024-12-10 12:41:34.355543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:12.244 [2024-12-10 12:41:34.355560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:12.244 [2024-12-10 12:41:34.355670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:12.244 [2024-12-10 12:41:34.355681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:12.244 [2024-12-10 12:41:34.355789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:12.244 [2024-12-10 12:41:34.355799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:12.244 [2024-12-10 12:41:34.355907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:12.244 [2024-12-10 12:41:34.355918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:12.244 passed 00:33:12.244 Test: blockdev nvme admin passthru ...passed 00:33:12.503 Test: blockdev copy ...passed 00:33:12.503 00:33:12.503 Run Summary: Type Total Ran Passed Failed Inactive 00:33:12.503 suites 1 1 n/a 0 0 00:33:12.503 tests 23 23 23 0 0 00:33:12.503 asserts 152 152 152 0 n/a 00:33:12.503 00:33:12.503 Elapsed time = 1.191 
seconds 00:33:12.503 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:12.503 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.503 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:12.503 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.503 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:33:12.503 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:33:12.503 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:12.503 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:33:12.503 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:12.503 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:33:12.503 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:12.503 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:12.503 rmmod nvme_tcp 00:33:12.503 rmmod nvme_fabrics 00:33:12.503 rmmod nvme_keyring 00:33:12.503 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:12.503 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:33:12.503 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:33:12.503 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 1865331 ']' 00:33:12.503 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1865331 00:33:12.503 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1865331 ']' 00:33:12.503 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1865331 00:33:12.503 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:33:12.503 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:12.503 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1865331 00:33:12.762 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:33:12.762 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:33:12.762 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1865331' 00:33:12.762 killing process with pid 1865331 00:33:12.762 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1865331 00:33:12.762 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1865331 00:33:12.762 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:12.762 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:12.762 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:12.762 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:33:12.762 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:33:12.762 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:12.762 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:33:12.762 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:12.762 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:12.762 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:12.762 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:12.762 12:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.298 12:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:15.298 00:33:15.298 real 0m10.091s 00:33:15.298 user 0m9.801s 00:33:15.298 sys 0m5.196s 00:33:15.298 12:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:15.298 12:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:15.298 ************************************ 00:33:15.298 END TEST nvmf_bdevio 00:33:15.298 ************************************ 00:33:15.298 12:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:33:15.298 00:33:15.298 real 4m33.824s 00:33:15.298 user 9m8.638s 00:33:15.298 sys 1m50.995s 00:33:15.298 12:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:33:15.298 12:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:15.298 ************************************ 00:33:15.298 END TEST nvmf_target_core_interrupt_mode 00:33:15.298 ************************************ 00:33:15.298 12:41:36 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:15.298 12:41:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:15.298 12:41:37 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:15.298 12:41:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:15.298 ************************************ 00:33:15.298 START TEST nvmf_interrupt 00:33:15.298 ************************************ 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:15.299 * Looking for test storage... 
00:33:15.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:15.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.299 --rc genhtml_branch_coverage=1 00:33:15.299 --rc genhtml_function_coverage=1 00:33:15.299 --rc genhtml_legend=1 00:33:15.299 --rc geninfo_all_blocks=1 00:33:15.299 --rc geninfo_unexecuted_blocks=1 00:33:15.299 00:33:15.299 ' 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:15.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.299 --rc genhtml_branch_coverage=1 00:33:15.299 --rc 
genhtml_function_coverage=1 00:33:15.299 --rc genhtml_legend=1 00:33:15.299 --rc geninfo_all_blocks=1 00:33:15.299 --rc geninfo_unexecuted_blocks=1 00:33:15.299 00:33:15.299 ' 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:15.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.299 --rc genhtml_branch_coverage=1 00:33:15.299 --rc genhtml_function_coverage=1 00:33:15.299 --rc genhtml_legend=1 00:33:15.299 --rc geninfo_all_blocks=1 00:33:15.299 --rc geninfo_unexecuted_blocks=1 00:33:15.299 00:33:15.299 ' 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:15.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:15.299 --rc genhtml_branch_coverage=1 00:33:15.299 --rc genhtml_function_coverage=1 00:33:15.299 --rc genhtml_legend=1 00:33:15.299 --rc geninfo_all_blocks=1 00:33:15.299 --rc geninfo_unexecuted_blocks=1 00:33:15.299 00:33:15.299 ' 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:15.299 
12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.299 
12:41:37 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:15.299 12:41:37 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/interrupt/common.sh 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:15.299 
12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:33:15.299 12:41:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:21.871 12:41:42 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:21.871 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:21.871 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:21.871 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:21.872 12:41:42 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:21.872 Found net devices under 0000:86:00.0: cvl_0_0 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:21.872 Found net devices under 0000:86:00.1: cvl_0_1 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:21.872 12:41:42 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:21.872 12:41:43 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:21.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:21.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:33:21.872 00:33:21.872 --- 10.0.0.2 ping statistics --- 00:33:21.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.872 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:21.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:21.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:33:21.872 00:33:21.872 --- 10.0.0.1 ping statistics --- 00:33:21.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:21.872 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:21.872 12:41:43 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1869122 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1869122 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1869122 ']' 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:21.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:21.872 12:41:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:21.872 [2024-12-10 12:41:43.275328] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:21.872 [2024-12-10 12:41:43.276395] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:33:21.872 [2024-12-10 12:41:43.276440] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:21.872 [2024-12-10 12:41:43.357823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:21.872 [2024-12-10 12:41:43.399394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:21.872 [2024-12-10 12:41:43.399434] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:21.872 [2024-12-10 12:41:43.399442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:21.872 [2024-12-10 12:41:43.399450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:21.872 [2024-12-10 12:41:43.399456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:21.872 [2024-12-10 12:41:43.404177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:21.872 [2024-12-10 12:41:43.404181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:21.872 [2024-12-10 12:41:43.473068] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:21.872 [2024-12-10 12:41:43.473128] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:21.872 [2024-12-10 12:41:43.473264] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:33:22.131 5000+0 records in 00:33:22.131 5000+0 records out 00:33:22.131 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0170328 s, 601 MB/s 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aiofile AIO0 2048 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:22.131 AIO0 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.131 12:41:44 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:22.131 [2024-12-10 12:41:44.216862] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.131 12:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:22.131 [2024-12-10 12:41:44.253250] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:22.132 12:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.132 12:41:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:22.132 12:41:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1869122 0 00:33:22.132 12:41:44 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1869122 0 idle 00:33:22.132 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1869122 00:33:22.132 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:22.132 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:22.132 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:22.132 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:22.132 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:22.132 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:22.132 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:22.132 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:22.132 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:22.132 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1869122 -w 256 00:33:22.132 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1869122 root 20 0 128.2g 45312 33792 S 0.0 0.0 0:00.26 reactor_0' 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1869122 root 20 0 128.2g 45312 33792 S 0.0 0.0 0:00.26 reactor_0 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:22.391 
12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1869122 1 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1869122 1 idle 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1869122 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1869122 -w 256 00:33:22.391 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1869133 root 20 0 128.2g 45312 33792 S 0.0 0.0 0:00.00 reactor_1' 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1869133 root 20 0 128.2g 
45312 33792 S 0.0 0.0 0:00.00 reactor_1 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1869388 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1869122 0 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1869122 0 busy 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1869122 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1869122 -w 256 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1869122 root 20 0 128.2g 46848 33792 R 62.5 0.0 0:00.36 reactor_0' 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1869122 root 20 0 128.2g 46848 33792 R 62.5 0.0 0:00.36 reactor_0 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=62.5 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=62 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:22.650 12:41:44 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1869122 1 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1869122 1 busy 00:33:22.650 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1869122 00:33:22.909 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:22.909 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:22.909 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:22.909 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:22.909 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:22.909 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:22.909 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:22.909 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:22.909 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1869122 -w 256 00:33:22.909 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:22.909 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1869133 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.23 reactor_1' 00:33:22.909 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1869133 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.23 reactor_1 00:33:22.909 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:22.909 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:22.909 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:33:22.909 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:33:22.909 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:22.909 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:22.909 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:22.909 12:41:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:22.909 12:41:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1869388 00:33:33.005 Initializing NVMe Controllers 00:33:33.005 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:33.005 Controller IO queue size 256, less than required. 00:33:33.005 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:33.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:33.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:33.005 Initialization complete. Launching workers. 
00:33:33.005 ======================================================== 00:33:33.005 Latency(us) 00:33:33.005 Device Information : IOPS MiB/s Average min max 00:33:33.005 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 15954.60 62.32 16054.26 3010.92 29550.33 00:33:33.005 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16218.40 63.35 15789.84 7577.33 26837.96 00:33:33.005 ======================================================== 00:33:33.005 Total : 32173.00 125.68 15920.97 3010.92 29550.33 00:33:33.005 00:33:33.005 12:41:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:33.005 12:41:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1869122 0 00:33:33.005 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1869122 0 idle 00:33:33.005 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1869122 00:33:33.005 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:33.005 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:33.005 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:33.005 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:33.005 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:33.005 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:33.005 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:33.005 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:33.005 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1869122 -w 256 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1869122 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.25 reactor_0' 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1869122 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.25 reactor_0 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1869122 1 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1869122 1 idle 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1869122 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:33.006 12:41:54 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1869122 -w 256 00:33:33.006 12:41:54 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:33.006 12:41:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1869133 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:33:33.006 12:41:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1869133 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:33:33.006 12:41:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:33.006 12:41:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:33.265 12:41:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:33.265 12:41:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:33.265 12:41:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:33.265 12:41:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:33.265 12:41:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:33.265 12:41:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:33.265 12:41:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:33.523 12:41:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
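The `reactor_is_busy_or_idle` checks traced above all reduce to the same recipe: take one batch sample of `top -bHn 1` for the target pid, grep the row for the `reactor_N` thread, pull the %CPU column with `sed`/`awk`, truncate the fraction, and compare against the busy or idle threshold. A minimal standalone sketch of that parsing, testable against the rows captured in this log (the function names `parse_cpu_rate` and `classify_reactor` are hypothetical, not part of `interrupt/common.sh`):

```shell
#!/usr/bin/env bash
# Hypothetical helpers mirroring the traced interrupt/common.sh logic:
# top -bHn 1 -p $pid | grep reactor_$idx | sed 's/^\s*//g' | awk '{print $9}'

parse_cpu_rate() {
    # $1: one row of `top -bHn 1` output for the reactor thread.
    # Column 9 of top's default per-thread layout is %CPU.
    echo "$1" | sed -e 's/^\s*//g' | awk '{print $9}'
}

classify_reactor() {
    # $1: top row; $2: expected state (busy|idle);
    # $3: busy threshold; $4: idle threshold
    local row=$1 state=$2 busy_threshold=$3 idle_threshold=$4 cpu_rate
    cpu_rate=$(parse_cpu_rate "$row")
    cpu_rate=${cpu_rate%.*}   # truncate "99.9" -> "99", as the script does
    if [[ $state == busy ]]; then
        # busy passes when the thread is at or above the busy threshold
        (( cpu_rate >= busy_threshold )) && echo ok || echo fail
    else
        # idle passes when the thread is at or below the idle threshold
        (( cpu_rate <= idle_threshold )) && echo ok || echo fail
    fi
}

# Sample rows copied verbatim from the log above
busy_row='1869133 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.23 reactor_1'
idle_row='1869122 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.25 reactor_0'
classify_reactor "$busy_row" busy 30 30   # prints ok
classify_reactor "$idle_row" idle 65 30   # prints ok
```

Note the thresholds change between phases in the log (BUSY_THRESHOLD=30 for the busy check, busy_threshold=65 / idle_threshold=30 for the idle checks), which is why a reactor at 0.0% CPU passes the idle check and one pinned at 99.9% passes the busy check.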
00:33:33.523 12:41:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:33:33.523 12:41:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:33.523 12:41:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:33.524 12:41:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:33:35.428 12:41:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:35.428 12:41:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:35.428 12:41:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:35.428 12:41:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:35.428 12:41:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:35.428 12:41:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:33:35.428 12:41:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:35.428 12:41:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1869122 0 00:33:35.428 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1869122 0 idle 00:33:35.428 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1869122 00:33:35.428 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:35.428 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:35.429 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:35.429 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:35.429 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:35.429 12:41:57 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:35.429 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:35.429 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:35.429 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:35.429 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1869122 -w 256 00:33:35.429 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1869122 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.50 reactor_0' 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1869122 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.50 reactor_0 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1869122 1 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1869122 1 idle 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1869122 00:33:35.688 
12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1869122 -w 256 00:33:35.688 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:35.948 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1869133 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.10 reactor_1' 00:33:35.948 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1869133 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.10 reactor_1 00:33:35.948 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:35.948 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:35.948 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:35.948 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:35.948 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:35.948 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:35.948 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:33:35.948 12:41:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:35.948 12:41:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:35.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:35.948 12:41:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:35.948 12:41:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:33:35.948 12:41:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:35.948 12:41:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:35.948 12:41:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:35.948 12:41:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:35.948 12:41:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:33:35.948 12:41:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:33:35.948 12:41:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:33:35.948 12:41:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:35.948 12:41:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:33:35.948 12:41:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:35.948 12:41:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:33:35.948 12:41:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:35.948 12:41:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:35.948 rmmod nvme_tcp 00:33:35.948 rmmod nvme_fabrics 00:33:36.207 rmmod nvme_keyring 00:33:36.207 12:41:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:36.207 12:41:58 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:33:36.207 12:41:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:33:36.207 12:41:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 1869122 ']' 00:33:36.207 12:41:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1869122 00:33:36.207 12:41:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1869122 ']' 00:33:36.207 12:41:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1869122 00:33:36.207 12:41:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:33:36.207 12:41:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:36.207 12:41:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1869122 00:33:36.207 12:41:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:36.207 12:41:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:36.207 12:41:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1869122' 00:33:36.207 killing process with pid 1869122 00:33:36.207 12:41:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1869122 00:33:36.207 12:41:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1869122 00:33:36.467 12:41:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:36.467 12:41:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:36.467 12:41:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:36.467 12:41:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:33:36.467 12:41:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:33:36.467 12:41:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:36.467 12:41:58 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:33:36.467 12:41:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:36.467 12:41:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:36.467 12:41:58 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.467 12:41:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:36.467 12:41:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:38.372 12:42:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:38.372 00:33:38.372 real 0m23.434s 00:33:38.372 user 0m39.845s 00:33:38.372 sys 0m8.364s 00:33:38.372 12:42:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:38.372 12:42:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:38.372 ************************************ 00:33:38.372 END TEST nvmf_interrupt 00:33:38.372 ************************************ 00:33:38.372 00:33:38.372 real 27m25.811s 00:33:38.372 user 56m22.746s 00:33:38.372 sys 9m17.045s 00:33:38.372 12:42:00 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:38.372 12:42:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:38.372 ************************************ 00:33:38.372 END TEST nvmf_tcp 00:33:38.372 ************************************ 00:33:38.631 12:42:00 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:33:38.631 12:42:00 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:38.631 12:42:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:38.631 12:42:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:38.631 12:42:00 -- common/autotest_common.sh@10 -- # set +x 00:33:38.631 ************************************ 
00:33:38.631 START TEST spdkcli_nvmf_tcp 00:33:38.631 ************************************ 00:33:38.631 12:42:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:38.631 * Looking for test storage... 00:33:38.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli 00:33:38.631 12:42:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:38.631 12:42:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:33:38.631 12:42:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:38.631 12:42:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v 
< (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:38.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.632 --rc genhtml_branch_coverage=1 00:33:38.632 --rc genhtml_function_coverage=1 00:33:38.632 --rc genhtml_legend=1 00:33:38.632 --rc geninfo_all_blocks=1 00:33:38.632 --rc geninfo_unexecuted_blocks=1 00:33:38.632 00:33:38.632 ' 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:38.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.632 --rc genhtml_branch_coverage=1 00:33:38.632 --rc genhtml_function_coverage=1 00:33:38.632 --rc genhtml_legend=1 00:33:38.632 --rc geninfo_all_blocks=1 
00:33:38.632 --rc geninfo_unexecuted_blocks=1 00:33:38.632 00:33:38.632 ' 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:38.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.632 --rc genhtml_branch_coverage=1 00:33:38.632 --rc genhtml_function_coverage=1 00:33:38.632 --rc genhtml_legend=1 00:33:38.632 --rc geninfo_all_blocks=1 00:33:38.632 --rc geninfo_unexecuted_blocks=1 00:33:38.632 00:33:38.632 ' 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:38.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:38.632 --rc genhtml_branch_coverage=1 00:33:38.632 --rc genhtml_function_coverage=1 00:33:38.632 --rc genhtml_legend=1 00:33:38.632 --rc geninfo_all_blocks=1 00:33:38.632 --rc geninfo_unexecuted_blocks=1 00:33:38.632 00:33:38.632 ' 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/common.sh 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/spdkcli_job.py 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/clear_config.py 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:38.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1872133 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1872133 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1872133 ']' 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:38.632 
12:42:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:38.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:38.632 12:42:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:38.891 [2024-12-10 12:42:00.831279] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:33:38.891 [2024-12-10 12:42:00.831331] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1872133 ] 00:33:38.891 [2024-12-10 12:42:00.905067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:38.891 [2024-12-10 12:42:00.950526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:38.891 [2024-12-10 12:42:00.950529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.891 12:42:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:38.891 12:42:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:33:38.891 12:42:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:38.891 12:42:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:38.891 12:42:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:39.149 12:42:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:39.149 12:42:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:39.149 12:42:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:33:39.149 12:42:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:39.149 12:42:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:39.149 12:42:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:39.149 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:39.149 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:39.149 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:39.149 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:39.149 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:39.149 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:39.149 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:39.149 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:39.149 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:39.149 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:39.149 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:39.149 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:39.149 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:39.149 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:33:39.149 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:39.149 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:39.150 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:39.150 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:39.150 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:39.150 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:39.150 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:39.150 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:39.150 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:39.150 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:39.150 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:39.150 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:39.150 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:39.150 ' 00:33:41.678 [2024-12-10 12:42:03.772427] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:43.055 [2024-12-10 12:42:05.108878] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:45.585 [2024-12-10 12:42:07.612760] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:33:48.112 [2024-12-10 12:42:09.787694] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:49.487 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:49.487 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:49.487 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:49.487 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:49.487 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:49.487 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:49.487 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:49.487 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:49.487 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:49.487 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:49.487 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:49.487 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:49.487 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:49.487 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:49.487 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:33:49.487 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:49.487 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:49.487 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:49.487 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:49.487 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:49.487 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:49.487 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:49.487 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:49.487 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:49.487 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:49.487 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:49.487 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:49.487 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:49.487 12:42:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:49.487 12:42:11 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:33:49.487 12:42:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.487 12:42:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:49.487 12:42:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:49.487 12:42:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:49.487 12:42:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:49.487 12:42:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/spdkcli.py ll /nvmf 00:33:50.054 12:42:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:50.054 12:42:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:50.054 12:42:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:50.054 12:42:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:50.054 12:42:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:50.054 12:42:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:50.054 12:42:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:50.054 12:42:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:50.054 12:42:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:50.054 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:50.054 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:50.054 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:50.054 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:50.054 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:50.054 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:50.054 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:50.054 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:50.054 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:50.054 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:50.054 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:50.054 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:50.054 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:50.054 ' 00:33:56.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:56.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:56.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:56.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:56.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:56.613 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:56.613 Executing command: ['/nvmf/subsystem delete 
nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:56.613 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:56.613 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:56.613 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:56.613 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:56.613 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:56.613 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:56.613 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1872133 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1872133 ']' 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1872133 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1872133 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1872133' 00:33:56.613 killing process with pid 1872133 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1872133 00:33:56.613 12:42:17 
spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1872133 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1872133 ']' 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1872133 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1872133 ']' 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1872133 00:33:56.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (1872133) - No such process 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1872133 is not found' 00:33:56.613 Process with pid 1872133 is not found 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:56.613 00:33:56.613 real 0m17.359s 00:33:56.613 user 0m38.299s 00:33:56.613 sys 0m0.793s 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:56.613 12:42:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:56.613 ************************************ 00:33:56.613 END TEST spdkcli_nvmf_tcp 00:33:56.613 ************************************ 00:33:56.613 12:42:17 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:56.613 12:42:17 -- common/autotest_common.sh@1105 -- 
# '[' 3 -le 1 ']' 00:33:56.613 12:42:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:56.613 12:42:17 -- common/autotest_common.sh@10 -- # set +x 00:33:56.613 ************************************ 00:33:56.613 START TEST nvmf_identify_passthru 00:33:56.613 ************************************ 00:33:56.613 12:42:18 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:56.613 * Looking for test storage... 00:33:56.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:33:56.613 12:42:18 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:56.613 12:42:18 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:33:56.613 12:42:18 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:56.613 12:42:18 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:56.613 12:42:18 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:56.613 12:42:18 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:56.613 12:42:18 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:56.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.613 --rc genhtml_branch_coverage=1 00:33:56.613 --rc genhtml_function_coverage=1 00:33:56.613 --rc genhtml_legend=1 
00:33:56.613 --rc geninfo_all_blocks=1 00:33:56.613 --rc geninfo_unexecuted_blocks=1 00:33:56.613 00:33:56.613 ' 00:33:56.613 12:42:18 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:56.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.613 --rc genhtml_branch_coverage=1 00:33:56.613 --rc genhtml_function_coverage=1 00:33:56.613 --rc genhtml_legend=1 00:33:56.613 --rc geninfo_all_blocks=1 00:33:56.613 --rc geninfo_unexecuted_blocks=1 00:33:56.613 00:33:56.613 ' 00:33:56.613 12:42:18 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:56.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.613 --rc genhtml_branch_coverage=1 00:33:56.613 --rc genhtml_function_coverage=1 00:33:56.613 --rc genhtml_legend=1 00:33:56.613 --rc geninfo_all_blocks=1 00:33:56.613 --rc geninfo_unexecuted_blocks=1 00:33:56.613 00:33:56.613 ' 00:33:56.613 12:42:18 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:56.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:56.613 --rc genhtml_branch_coverage=1 00:33:56.613 --rc genhtml_function_coverage=1 00:33:56.613 --rc genhtml_legend=1 00:33:56.613 --rc geninfo_all_blocks=1 00:33:56.613 --rc geninfo_unexecuted_blocks=1 00:33:56.613 00:33:56.613 ' 00:33:56.613 12:42:18 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:33:56.613 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:56.613 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:56.613 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:56.613 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:56.613 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:56.613 12:42:18 nvmf_identify_passthru -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:56.613 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:56.613 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:56.613 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:56.613 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:56.613 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:33:56.614 12:42:18 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:56.614 12:42:18 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:56.614 12:42:18 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:56.614 12:42:18 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:56.614 12:42:18 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.614 12:42:18 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.614 12:42:18 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.614 12:42:18 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:56.614 12:42:18 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:56.614 12:42:18 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:56.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:56.614 12:42:18 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:33:56.614 12:42:18 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:56.614 12:42:18 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:56.614 12:42:18 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:56.614 12:42:18 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:56.614 12:42:18 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.614 12:42:18 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.614 12:42:18 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.614 12:42:18 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:56.614 12:42:18 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.614 12:42:18 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.614 12:42:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:56.614 12:42:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:56.614 12:42:18 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:56.614 12:42:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:01.885 
12:42:23 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:01.885 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:01.885 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:01.885 Found net devices under 0000:86:00.0: cvl_0_0 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.885 12:42:23 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:01.885 Found net devices under 0000:86:00.1: cvl_0_1 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.885 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:01.886 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:34:01.886 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:01.886 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:01.886 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:01.886 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:01.886 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:01.886 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:01.886 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:01.886 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:01.886 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:01.886 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:01.886 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:01.886 
12:42:23 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:01.886 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:01.886 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:01.886 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:01.886 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:01.886 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:01.886 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:01.886 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:01.886 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:01.886 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:01.886 12:42:23 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:02.145 12:42:24 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:02.145 12:42:24 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:02.145 12:42:24 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:02.145 12:42:24 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:02.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:02.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:34:02.145 00:34:02.145 --- 10.0.0.2 ping statistics --- 00:34:02.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:02.145 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:34:02.145 12:42:24 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:02.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:02.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:34:02.145 00:34:02.145 --- 10.0.0.1 ping statistics --- 00:34:02.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:02.145 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:34:02.145 12:42:24 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:02.145 12:42:24 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:34:02.145 12:42:24 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:02.145 12:42:24 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:02.145 12:42:24 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:02.145 12:42:24 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:02.145 12:42:24 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:02.145 12:42:24 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:02.145 12:42:24 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:02.145 12:42:24 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:02.145 12:42:24 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:02.145 12:42:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:02.145 12:42:24 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:02.145 
12:42:24 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:02.145 12:42:24 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:02.145 12:42:24 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:34:02.145 12:42:24 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:34:02.145 12:42:24 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:34:02.145 12:42:24 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:34:02.145 12:42:24 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:02.145 12:42:24 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/gen_nvme.sh 00:34:02.145 12:42:24 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:34:02.145 12:42:24 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:34:02.145 12:42:24 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:34:02.145 12:42:24 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:34:02.404 12:42:24 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:34:02.404 12:42:24 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:34:02.404 12:42:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:02.404 12:42:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:02.404 12:42:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:06.593 12:42:28 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:34:06.593 12:42:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:34:06.593 12:42:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:06.593 12:42:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:10.781 12:42:32 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:10.781 12:42:32 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:10.781 12:42:32 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:10.781 12:42:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:10.781 12:42:32 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:10.781 12:42:32 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:10.781 12:42:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:10.781 12:42:32 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1879852 00:34:10.781 12:42:32 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:10.781 12:42:32 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:10.781 12:42:32 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1879852 00:34:10.781 12:42:32 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1879852 ']' 00:34:10.781 12:42:32 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:34:10.781 12:42:32 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:10.781 12:42:32 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:10.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:10.781 12:42:32 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:10.781 12:42:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:10.781 [2024-12-10 12:42:32.751401] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:34:10.781 [2024-12-10 12:42:32.751450] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:10.781 [2024-12-10 12:42:32.831592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:10.781 [2024-12-10 12:42:32.874821] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:10.781 [2024-12-10 12:42:32.874856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:10.781 [2024-12-10 12:42:32.874864] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:10.781 [2024-12-10 12:42:32.874873] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:10.781 [2024-12-10 12:42:32.874878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:10.781 [2024-12-10 12:42:32.876425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:10.781 [2024-12-10 12:42:32.876533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:10.781 [2024-12-10 12:42:32.876643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:10.781 [2024-12-10 12:42:32.876643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:10.781 12:42:32 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:10.781 12:42:32 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:34:10.781 12:42:32 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:10.781 12:42:32 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.782 12:42:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:10.782 INFO: Log level set to 20 00:34:10.782 INFO: Requests: 00:34:10.782 { 00:34:10.782 "jsonrpc": "2.0", 00:34:10.782 "method": "nvmf_set_config", 00:34:10.782 "id": 1, 00:34:10.782 "params": { 00:34:10.782 "admin_cmd_passthru": { 00:34:10.782 "identify_ctrlr": true 00:34:10.782 } 00:34:10.782 } 00:34:10.782 } 00:34:10.782 00:34:10.782 INFO: response: 00:34:10.782 { 00:34:10.782 "jsonrpc": "2.0", 00:34:10.782 "id": 1, 00:34:10.782 "result": true 00:34:10.782 } 00:34:10.782 00:34:10.782 12:42:32 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.782 12:42:32 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:10.782 12:42:32 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.782 12:42:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:10.782 INFO: Setting log level to 20 00:34:10.782 INFO: Setting log level to 20 00:34:10.782 INFO: Log level set to 20 00:34:10.782 INFO: Log level set to 20 00:34:10.782 
INFO: Requests: 00:34:10.782 { 00:34:10.782 "jsonrpc": "2.0", 00:34:10.782 "method": "framework_start_init", 00:34:10.782 "id": 1 00:34:10.782 } 00:34:10.782 00:34:10.782 INFO: Requests: 00:34:10.782 { 00:34:10.782 "jsonrpc": "2.0", 00:34:10.782 "method": "framework_start_init", 00:34:10.782 "id": 1 00:34:10.782 } 00:34:10.782 00:34:11.040 [2024-12-10 12:42:32.980640] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:11.040 INFO: response: 00:34:11.040 { 00:34:11.040 "jsonrpc": "2.0", 00:34:11.040 "id": 1, 00:34:11.040 "result": true 00:34:11.040 } 00:34:11.040 00:34:11.040 INFO: response: 00:34:11.040 { 00:34:11.040 "jsonrpc": "2.0", 00:34:11.040 "id": 1, 00:34:11.040 "result": true 00:34:11.040 } 00:34:11.040 00:34:11.040 12:42:32 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.040 12:42:32 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:11.040 12:42:32 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.040 12:42:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:11.040 INFO: Setting log level to 40 00:34:11.040 INFO: Setting log level to 40 00:34:11.040 INFO: Setting log level to 40 00:34:11.040 [2024-12-10 12:42:32.993929] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:11.040 12:42:32 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.040 12:42:32 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:11.040 12:42:32 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:11.040 12:42:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:11.040 12:42:33 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:34:11.040 12:42:33 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.040 12:42:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:14.318 Nvme0n1 00:34:14.318 12:42:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.318 12:42:35 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:14.318 12:42:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.318 12:42:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:14.318 12:42:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.318 12:42:35 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:14.318 12:42:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.318 12:42:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:14.318 12:42:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.318 12:42:35 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:14.318 12:42:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.318 12:42:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:14.318 [2024-12-10 12:42:35.900722] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:14.318 12:42:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.318 12:42:35 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:14.318 12:42:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.318 12:42:35 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:14.318 [ 00:34:14.318 { 00:34:14.318 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:14.318 "subtype": "Discovery", 00:34:14.318 "listen_addresses": [], 00:34:14.318 "allow_any_host": true, 00:34:14.318 "hosts": [] 00:34:14.318 }, 00:34:14.318 { 00:34:14.318 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:14.318 "subtype": "NVMe", 00:34:14.318 "listen_addresses": [ 00:34:14.318 { 00:34:14.318 "trtype": "TCP", 00:34:14.318 "adrfam": "IPv4", 00:34:14.318 "traddr": "10.0.0.2", 00:34:14.318 "trsvcid": "4420" 00:34:14.318 } 00:34:14.318 ], 00:34:14.318 "allow_any_host": true, 00:34:14.318 "hosts": [], 00:34:14.318 "serial_number": "SPDK00000000000001", 00:34:14.318 "model_number": "SPDK bdev Controller", 00:34:14.318 "max_namespaces": 1, 00:34:14.318 "min_cntlid": 1, 00:34:14.318 "max_cntlid": 65519, 00:34:14.318 "namespaces": [ 00:34:14.318 { 00:34:14.318 "nsid": 1, 00:34:14.318 "bdev_name": "Nvme0n1", 00:34:14.318 "name": "Nvme0n1", 00:34:14.318 "nguid": "BE849A51C4984A48BF0546C8AA024E7D", 00:34:14.318 "uuid": "be849a51-c498-4a48-bf05-46c8aa024e7d" 00:34:14.318 } 00:34:14.318 ] 00:34:14.318 } 00:34:14.318 ] 00:34:14.318 12:42:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.318 12:42:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:14.318 12:42:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:14.318 12:42:35 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:14.318 12:42:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:34:14.318 12:42:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:14.318 12:42:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:14.318 12:42:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:14.318 12:42:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:14.318 12:42:36 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:34:14.318 12:42:36 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:14.318 12:42:36 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:14.318 12:42:36 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.318 12:42:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:14.318 12:42:36 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.318 12:42:36 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:14.318 12:42:36 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:14.318 12:42:36 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:14.318 12:42:36 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:34:14.318 12:42:36 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:14.318 12:42:36 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:34:14.318 12:42:36 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:14.318 12:42:36 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:14.318 rmmod nvme_tcp 00:34:14.318 rmmod nvme_fabrics 00:34:14.318 rmmod nvme_keyring 00:34:14.318 12:42:36 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:14.318 12:42:36 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:34:14.318 12:42:36 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:34:14.318 12:42:36 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 1879852 ']' 00:34:14.318 12:42:36 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1879852 00:34:14.318 12:42:36 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1879852 ']' 00:34:14.318 12:42:36 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1879852 00:34:14.318 12:42:36 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:34:14.318 12:42:36 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:14.318 12:42:36 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1879852 00:34:14.575 12:42:36 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:14.575 12:42:36 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:14.575 12:42:36 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1879852' 00:34:14.575 killing process with pid 1879852 00:34:14.575 12:42:36 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1879852 00:34:14.575 12:42:36 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1879852 00:34:15.947 12:42:37 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:15.947 12:42:37 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:15.947 12:42:37 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:15.947 12:42:37 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:34:15.947 12:42:37 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:34:15.947 12:42:37 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:15.947 12:42:37 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:34:15.947 12:42:37 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:15.947 12:42:37 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:15.947 12:42:37 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.947 12:42:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:15.947 12:42:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.481 12:42:40 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:18.481 00:34:18.481 real 0m22.046s 00:34:18.481 user 0m27.130s 00:34:18.481 sys 0m6.187s 00:34:18.481 12:42:40 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:18.481 12:42:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.481 ************************************ 00:34:18.481 END TEST nvmf_identify_passthru 00:34:18.481 ************************************ 00:34:18.481 12:42:40 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/dif.sh 00:34:18.481 12:42:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:18.481 12:42:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:18.481 12:42:40 -- common/autotest_common.sh@10 -- # set +x 00:34:18.481 ************************************ 00:34:18.481 START TEST nvmf_dif 00:34:18.481 ************************************ 00:34:18.481 12:42:40 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/dif.sh 00:34:18.481 * Looking for test storage... 
00:34:18.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:34:18.481 12:42:40 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:18.481 12:42:40 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:34:18.481 12:42:40 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:18.481 12:42:40 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:18.481 12:42:40 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:34:18.481 12:42:40 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:18.481 12:42:40 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:18.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.481 --rc genhtml_branch_coverage=1 00:34:18.481 --rc genhtml_function_coverage=1 00:34:18.481 --rc genhtml_legend=1 00:34:18.481 --rc geninfo_all_blocks=1 00:34:18.481 --rc geninfo_unexecuted_blocks=1 00:34:18.481 00:34:18.481 ' 00:34:18.481 12:42:40 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:18.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.481 --rc genhtml_branch_coverage=1 00:34:18.481 --rc genhtml_function_coverage=1 00:34:18.481 --rc genhtml_legend=1 00:34:18.481 --rc geninfo_all_blocks=1 00:34:18.481 --rc geninfo_unexecuted_blocks=1 00:34:18.481 00:34:18.481 ' 00:34:18.481 12:42:40 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:34:18.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.481 --rc genhtml_branch_coverage=1 00:34:18.481 --rc genhtml_function_coverage=1 00:34:18.481 --rc genhtml_legend=1 00:34:18.481 --rc geninfo_all_blocks=1 00:34:18.481 --rc geninfo_unexecuted_blocks=1 00:34:18.481 00:34:18.481 ' 00:34:18.481 12:42:40 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:18.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.482 --rc genhtml_branch_coverage=1 00:34:18.482 --rc genhtml_function_coverage=1 00:34:18.482 --rc genhtml_legend=1 00:34:18.482 --rc geninfo_all_blocks=1 00:34:18.482 --rc geninfo_unexecuted_blocks=1 00:34:18.482 00:34:18.482 ' 00:34:18.482 12:42:40 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:18.482 12:42:40 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:34:18.482 12:42:40 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:34:18.482 12:42:40 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:18.482 12:42:40 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:18.482 12:42:40 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:18.482 12:42:40 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.482 12:42:40 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.482 12:42:40 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.482 12:42:40 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:18.482 12:42:40 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:18.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:18.482 12:42:40 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:18.482 12:42:40 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:34:18.482 12:42:40 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:18.482 12:42:40 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:18.482 12:42:40 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.482 12:42:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:18.482 12:42:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:18.482 12:42:40 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:34:18.482 12:42:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:23.755 12:42:45 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:24.013 12:42:45 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:34:24.013 12:42:45 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:24.013 12:42:45 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:24.013 12:42:45 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:24.013 12:42:45 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:24.013 12:42:45 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:24.013 12:42:45 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:34:24.013 12:42:45 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:24.013 12:42:45 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:34:24.014 12:42:45 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:24.014 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:24.014 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:24.014 12:42:45 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:24.014 Found net devices under 0000:86:00.0: cvl_0_0 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:24.014 Found net devices under 0000:86:00.1: cvl_0_1 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:24.014 
12:42:45 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:24.014 12:42:45 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:24.014 12:42:46 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:24.014 12:42:46 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:24.014 12:42:46 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:24.014 12:42:46 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:24.272 12:42:46 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:24.272 12:42:46 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:24.272 12:42:46 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:24.272 12:42:46 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:24.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:24.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:34:24.273 00:34:24.273 --- 10.0.0.2 ping statistics --- 00:34:24.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:24.273 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:34:24.273 12:42:46 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:24.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:24.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:34:24.273 00:34:24.273 --- 10.0.0.1 ping statistics --- 00:34:24.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:24.273 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:34:24.273 12:42:46 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:24.273 12:42:46 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:34:24.273 12:42:46 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:24.273 12:42:46 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:34:26.804 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:26.804 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:26.804 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:26.804 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:26.804 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:26.804 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:26.804 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:26.804 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:26.804 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:26.804 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:26.804 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:26.804 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:26.804 0000:80:04.4 (8086 2021): 
Already using the vfio-pci driver 00:34:26.804 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:26.804 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:26.804 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:26.805 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:27.063 12:42:49 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:27.063 12:42:49 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:27.063 12:42:49 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:27.063 12:42:49 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:27.063 12:42:49 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:27.063 12:42:49 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:27.063 12:42:49 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:27.063 12:42:49 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:27.063 12:42:49 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:27.063 12:42:49 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:27.063 12:42:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:27.063 12:42:49 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1885415 00:34:27.063 12:42:49 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1885415 00:34:27.063 12:42:49 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:27.063 12:42:49 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1885415 ']' 00:34:27.063 12:42:49 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:27.063 12:42:49 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:27.063 12:42:49 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:34:27.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:27.063 12:42:49 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:27.063 12:42:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:27.063 [2024-12-10 12:42:49.174022] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:34:27.063 [2024-12-10 12:42:49.174069] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:27.322 [2024-12-10 12:42:49.253477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:27.322 [2024-12-10 12:42:49.294278] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:27.322 [2024-12-10 12:42:49.294313] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:27.322 [2024-12-10 12:42:49.294321] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:27.322 [2024-12-10 12:42:49.294327] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:27.322 [2024-12-10 12:42:49.294335] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:27.322 [2024-12-10 12:42:49.294872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.322 12:42:49 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:27.322 12:42:49 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:34:27.322 12:42:49 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:27.322 12:42:49 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:27.322 12:42:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:27.322 12:42:49 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:27.322 12:42:49 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:27.322 12:42:49 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:27.322 12:42:49 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.322 12:42:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:27.322 [2024-12-10 12:42:49.439142] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:27.322 12:42:49 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.322 12:42:49 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:27.322 12:42:49 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:27.322 12:42:49 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:27.322 12:42:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:27.322 ************************************ 00:34:27.322 START TEST fio_dif_1_default 00:34:27.322 ************************************ 00:34:27.322 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:34:27.322 12:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:27.322 12:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:27.322 12:42:49 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:34:27.322 12:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:27.322 12:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:27.322 12:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:27.322 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.322 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:27.581 bdev_null0 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:27.581 [2024-12-10 12:42:49.515517] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:27.581 { 00:34:27.581 "params": { 00:34:27.581 "name": "Nvme$subsystem", 00:34:27.581 "trtype": "$TEST_TRANSPORT", 00:34:27.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:27.581 "adrfam": "ipv4", 00:34:27.581 "trsvcid": "$NVMF_PORT", 00:34:27.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:27.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:27.581 "hdgst": ${hdgst:-false}, 00:34:27.581 "ddgst": ${ddgst:-false} 00:34:27.581 }, 00:34:27.581 "method": "bdev_nvme_attach_controller" 00:34:27.581 } 00:34:27.581 EOF 00:34:27.581 )") 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:27.581 "params": { 00:34:27.581 "name": "Nvme0", 00:34:27.581 "trtype": "tcp", 00:34:27.581 "traddr": "10.0.0.2", 00:34:27.581 "adrfam": "ipv4", 00:34:27.581 "trsvcid": "4420", 00:34:27.581 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:27.581 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:27.581 "hdgst": false, 00:34:27.581 "ddgst": false 00:34:27.581 }, 00:34:27.581 "method": "bdev_nvme_attach_controller" 00:34:27.581 }' 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:27.581 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:27.582 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:27.582 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:27.582 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:34:27.582 12:42:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:27.840 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:27.840 fio-3.35 
00:34:27.840 Starting 1 thread 00:34:40.130 00:34:40.130 filename0: (groupid=0, jobs=1): err= 0: pid=1885688: Tue Dec 10 12:43:00 2024 00:34:40.130 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:34:40.130 slat (nsec): min=6035, max=26597, avg=6522.22, stdev=1405.93 00:34:40.130 clat (usec): min=40855, max=45303, avg=41008.15, stdev=291.77 00:34:40.130 lat (usec): min=40861, max=45330, avg=41014.67, stdev=292.25 00:34:40.130 clat percentiles (usec): 00:34:40.130 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:40.130 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:40.130 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:40.130 | 99.00th=[41681], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:34:40.130 | 99.99th=[45351] 00:34:40.130 bw ( KiB/s): min= 384, max= 416, per=99.49%, avg=388.80, stdev=11.72, samples=20 00:34:40.130 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:40.130 lat (msec) : 50=100.00% 00:34:40.130 cpu : usr=92.98%, sys=6.76%, ctx=10, majf=0, minf=0 00:34:40.130 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:40.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.130 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.130 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:40.130 00:34:40.130 Run status group 0 (all jobs): 00:34:40.130 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10011-10011msec 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.130 00:34:40.130 real 0m11.070s 00:34:40.130 user 0m16.221s 00:34:40.130 sys 0m0.975s 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:40.130 ************************************ 00:34:40.130 END TEST fio_dif_1_default 00:34:40.130 ************************************ 00:34:40.130 12:43:00 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:40.130 12:43:00 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:40.130 12:43:00 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:40.130 12:43:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:40.130 ************************************ 00:34:40.130 START TEST fio_dif_1_multi_subsystems 00:34:40.130 ************************************ 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:40.130 bdev_null0 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.130 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:40.131 [2024-12-10 12:43:00.658307] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:40.131 bdev_null1 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:40.131 12:43:00 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:40.131 { 00:34:40.131 "params": { 00:34:40.131 "name": "Nvme$subsystem", 00:34:40.131 "trtype": "$TEST_TRANSPORT", 00:34:40.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:40.131 "adrfam": "ipv4", 00:34:40.131 "trsvcid": "$NVMF_PORT", 00:34:40.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:40.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:40.131 "hdgst": ${hdgst:-false}, 00:34:40.131 "ddgst": ${ddgst:-false} 00:34:40.131 }, 00:34:40.131 "method": "bdev_nvme_attach_controller" 00:34:40.131 } 00:34:40.131 EOF 00:34:40.131 )") 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:40.131 
12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:40.131 { 00:34:40.131 "params": { 00:34:40.131 "name": "Nvme$subsystem", 00:34:40.131 "trtype": "$TEST_TRANSPORT", 00:34:40.131 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:40.131 "adrfam": "ipv4", 00:34:40.131 "trsvcid": "$NVMF_PORT", 00:34:40.131 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:40.131 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:40.131 "hdgst": ${hdgst:-false}, 00:34:40.131 "ddgst": ${ddgst:-false} 00:34:40.131 }, 00:34:40.131 "method": "bdev_nvme_attach_controller" 00:34:40.131 } 00:34:40.131 EOF 00:34:40.131 )") 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:40.131 "params": { 00:34:40.131 "name": "Nvme0", 00:34:40.131 "trtype": "tcp", 00:34:40.131 "traddr": "10.0.0.2", 00:34:40.131 "adrfam": "ipv4", 00:34:40.131 "trsvcid": "4420", 00:34:40.131 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:40.131 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:40.131 "hdgst": false, 00:34:40.131 "ddgst": false 00:34:40.131 }, 00:34:40.131 "method": "bdev_nvme_attach_controller" 00:34:40.131 },{ 00:34:40.131 "params": { 00:34:40.131 "name": "Nvme1", 00:34:40.131 "trtype": "tcp", 00:34:40.131 "traddr": "10.0.0.2", 00:34:40.131 "adrfam": "ipv4", 00:34:40.131 "trsvcid": "4420", 00:34:40.131 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:40.131 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:40.131 "hdgst": false, 00:34:40.131 "ddgst": false 00:34:40.131 }, 00:34:40.131 "method": "bdev_nvme_attach_controller" 00:34:40.131 }' 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:34:40.131 12:43:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:40.131 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:40.131 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:40.131 fio-3.35 00:34:40.131 Starting 2 threads 00:34:50.136 00:34:50.136 filename0: (groupid=0, jobs=1): err= 0: pid=1887657: Tue Dec 10 12:43:11 2024 00:34:50.136 read: IOPS=106, BW=425KiB/s (435kB/s)(4256KiB/10011msec) 00:34:50.136 slat (nsec): min=6134, max=25827, avg=7811.81, stdev=2512.93 00:34:50.136 clat (usec): min=365, max=42463, avg=37611.48, stdev=11182.83 00:34:50.136 lat (usec): min=372, max=42470, avg=37619.30, stdev=11182.82 00:34:50.136 clat percentiles (usec): 00:34:50.136 | 1.00th=[ 375], 5.00th=[ 396], 10.00th=[40633], 20.00th=[41157], 00:34:50.136 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:50.136 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:50.136 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:50.136 | 99.99th=[42206] 00:34:50.136 bw ( KiB/s): min= 384, max= 640, per=52.02%, avg=424.00, stdev=71.84, samples=20 00:34:50.136 iops : min= 96, max= 160, avg=106.00, stdev=17.96, samples=20 00:34:50.136 lat (usec) : 500=8.27% 00:34:50.136 lat (msec) : 50=91.73% 00:34:50.136 cpu : usr=96.94%, sys=2.81%, ctx=11, majf=0, minf=45 00:34:50.136 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:50.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:50.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.136 issued rwts: total=1064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.136 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:50.136 filename1: (groupid=0, jobs=1): err= 0: pid=1887658: Tue Dec 10 12:43:11 2024 00:34:50.136 read: IOPS=97, BW=390KiB/s (400kB/s)(3904KiB/10006msec) 00:34:50.136 slat (nsec): min=6153, max=32234, avg=7980.24, stdev=2781.28 00:34:50.136 clat (usec): min=40792, max=41991, avg=40983.79, stdev=97.55 00:34:50.136 lat (usec): min=40798, max=42002, avg=40991.77, stdev=98.03 00:34:50.136 clat percentiles (usec): 00:34:50.136 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:50.136 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:50.136 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:50.136 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:34:50.136 | 99.99th=[42206] 00:34:50.136 bw ( KiB/s): min= 384, max= 416, per=47.60%, avg=388.80, stdev=11.72, samples=20 00:34:50.136 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:50.136 lat (msec) : 50=100.00% 00:34:50.136 cpu : usr=96.58%, sys=3.17%, ctx=11, majf=0, minf=49 00:34:50.136 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:50.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.136 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.136 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:50.136 00:34:50.136 Run status group 0 (all jobs): 00:34:50.136 READ: bw=815KiB/s (835kB/s), 390KiB/s-425KiB/s (400kB/s-435kB/s), io=8160KiB (8356kB), run=10006-10011msec 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:50.136 12:43:11 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.136 00:34:50.136 real 0m11.248s 00:34:50.136 user 0m26.340s 00:34:50.136 sys 0m0.902s 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:50.136 12:43:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:50.136 ************************************ 00:34:50.136 END TEST fio_dif_1_multi_subsystems 00:34:50.136 ************************************ 00:34:50.136 12:43:11 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:50.136 12:43:11 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:50.136 12:43:11 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:50.136 12:43:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:50.136 ************************************ 00:34:50.136 START TEST fio_dif_rand_params 00:34:50.136 ************************************ 00:34:50.136 12:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:34:50.136 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:50.136 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:50.137 bdev_null0 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:50.137 12:43:11 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:50.137 [2024-12-10 12:43:11.977553] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:50.137 { 00:34:50.137 "params": { 00:34:50.137 "name": "Nvme$subsystem", 00:34:50.137 "trtype": "$TEST_TRANSPORT", 00:34:50.137 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:34:50.137 "adrfam": "ipv4", 00:34:50.137 "trsvcid": "$NVMF_PORT", 00:34:50.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:50.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:50.137 "hdgst": ${hdgst:-false}, 00:34:50.137 "ddgst": ${ddgst:-false} 00:34:50.137 }, 00:34:50.137 "method": "bdev_nvme_attach_controller" 00:34:50.137 } 00:34:50.137 EOF 00:34:50.137 )") 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:50.137 12:43:11 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:50.137 12:43:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:50.137 "params": { 00:34:50.137 "name": "Nvme0", 00:34:50.137 "trtype": "tcp", 00:34:50.137 "traddr": "10.0.0.2", 00:34:50.137 "adrfam": "ipv4", 00:34:50.137 "trsvcid": "4420", 00:34:50.137 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:50.137 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:50.137 "hdgst": false, 00:34:50.137 "ddgst": false 00:34:50.137 }, 00:34:50.137 "method": "bdev_nvme_attach_controller" 00:34:50.137 }' 00:34:50.137 12:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:50.137 12:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:50.137 12:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:50.137 12:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:50.137 12:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:50.137 12:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:50.137 12:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:50.137 12:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:50.137 12:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:34:50.137 12:43:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:50.401 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:50.401 ... 00:34:50.401 fio-3.35 00:34:50.401 Starting 3 threads 00:34:56.961 00:34:56.961 filename0: (groupid=0, jobs=1): err= 0: pid=1889635: Tue Dec 10 12:43:17 2024 00:34:56.961 read: IOPS=300, BW=37.6MiB/s (39.4MB/s)(190MiB/5043msec) 00:34:56.961 slat (nsec): min=6337, max=26651, avg=10922.65, stdev=2134.39 00:34:56.961 clat (usec): min=3517, max=88475, avg=9931.42, stdev=6596.65 00:34:56.961 lat (usec): min=3524, max=88482, avg=9942.34, stdev=6596.75 00:34:56.961 clat percentiles (usec): 00:34:56.961 | 1.00th=[ 3851], 5.00th=[ 4686], 10.00th=[ 7439], 20.00th=[ 8160], 00:34:56.961 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9503], 00:34:56.961 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10814], 95.00th=[11338], 00:34:56.961 | 99.00th=[49021], 99.50th=[50594], 99.90th=[52167], 99.95th=[88605], 00:34:56.961 | 99.99th=[88605] 00:34:56.961 bw ( KiB/s): min=22784, max=43776, per=34.25%, avg=38784.00, stdev=5814.26, samples=10 00:34:56.961 iops : min= 178, max= 342, avg=303.00, stdev=45.42, samples=10 00:34:56.961 lat (msec) : 4=1.71%, 10=75.15%, 20=20.70%, 50=1.78%, 100=0.66% 00:34:56.961 cpu : usr=94.55%, sys=5.16%, ctx=10, majf=0, minf=55 00:34:56.961 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:56.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.961 issued rwts: total=1517,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:56.961 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:56.961 filename0: (groupid=0, jobs=1): err= 0: pid=1889636: Tue Dec 10 12:43:17 2024 00:34:56.961 read: IOPS=298, BW=37.3MiB/s (39.1MB/s)(188MiB/5044msec) 00:34:56.961 slat (nsec): min=6410, max=38104, avg=11220.96, stdev=2283.17 
00:34:56.961 clat (usec): min=3652, max=89547, avg=10007.15, stdev=5167.53 00:34:56.961 lat (usec): min=3662, max=89553, avg=10018.37, stdev=5167.92 00:34:56.961 clat percentiles (usec): 00:34:56.961 | 1.00th=[ 3851], 5.00th=[ 5014], 10.00th=[ 6718], 20.00th=[ 8291], 00:34:56.961 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10159], 00:34:56.961 | 70.00th=[10552], 80.00th=[11076], 90.00th=[11863], 95.00th=[12256], 00:34:56.961 | 99.00th=[47449], 99.50th=[50070], 99.90th=[51643], 99.95th=[89654], 00:34:56.961 | 99.99th=[89654] 00:34:56.961 bw ( KiB/s): min=34304, max=46592, per=34.00%, avg=38502.40, stdev=3329.53, samples=10 00:34:56.961 iops : min= 268, max= 364, avg=300.80, stdev=26.01, samples=10 00:34:56.961 lat (msec) : 4=1.86%, 10=52.92%, 20=43.96%, 50=0.66%, 100=0.60% 00:34:56.961 cpu : usr=94.59%, sys=5.12%, ctx=7, majf=0, minf=40 00:34:56.961 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:56.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.961 issued rwts: total=1506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:56.961 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:56.961 filename0: (groupid=0, jobs=1): err= 0: pid=1889637: Tue Dec 10 12:43:17 2024 00:34:56.961 read: IOPS=285, BW=35.7MiB/s (37.4MB/s)(180MiB/5046msec) 00:34:56.961 slat (nsec): min=6352, max=34862, avg=11168.94, stdev=2211.62 00:34:56.961 clat (usec): min=3839, max=52276, avg=10460.19, stdev=5695.25 00:34:56.961 lat (usec): min=3846, max=52289, avg=10471.36, stdev=5695.49 00:34:56.961 clat percentiles (usec): 00:34:56.961 | 1.00th=[ 4228], 5.00th=[ 6521], 10.00th=[ 7439], 20.00th=[ 8455], 00:34:56.961 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10290], 00:34:56.961 | 70.00th=[10683], 80.00th=[11207], 90.00th=[11863], 95.00th=[12518], 00:34:56.961 | 99.00th=[47449], 99.50th=[50070], 
99.90th=[52167], 99.95th=[52167], 00:34:56.961 | 99.99th=[52167] 00:34:56.961 bw ( KiB/s): min=27904, max=40704, per=32.53%, avg=36838.40, stdev=3667.25, samples=10 00:34:56.961 iops : min= 218, max= 318, avg=287.80, stdev=28.65, samples=10 00:34:56.961 lat (msec) : 4=0.28%, 10=53.37%, 20=44.34%, 50=1.46%, 100=0.56% 00:34:56.961 cpu : usr=94.47%, sys=5.21%, ctx=10, majf=0, minf=44 00:34:56.961 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:56.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:56.961 issued rwts: total=1441,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:56.961 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:56.961 00:34:56.961 Run status group 0 (all jobs): 00:34:56.962 READ: bw=111MiB/s (116MB/s), 35.7MiB/s-37.6MiB/s (37.4MB/s-39.4MB/s), io=558MiB (585MB), run=5043-5046msec 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:56.962 12:43:18 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.962 bdev_null0 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.962 [2024-12-10 12:43:18.219135] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.962 bdev_null1 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:56.962 bdev_null2 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:56.962 { 00:34:56.962 "params": { 00:34:56.962 "name": "Nvme$subsystem", 00:34:56.962 "trtype": "$TEST_TRANSPORT", 00:34:56.962 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:56.962 "adrfam": "ipv4", 00:34:56.962 "trsvcid": "$NVMF_PORT", 00:34:56.962 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:56.962 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:56.962 "hdgst": ${hdgst:-false}, 00:34:56.962 "ddgst": ${ddgst:-false} 00:34:56.962 }, 00:34:56.962 "method": "bdev_nvme_attach_controller" 00:34:56.962 } 00:34:56.962 EOF 00:34:56.962 )") 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:56.962 12:43:18 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:56.962 12:43:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:56.962 { 00:34:56.962 "params": { 00:34:56.962 "name": "Nvme$subsystem", 00:34:56.962 "trtype": "$TEST_TRANSPORT", 00:34:56.962 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:56.963 "adrfam": "ipv4", 00:34:56.963 "trsvcid": "$NVMF_PORT", 00:34:56.963 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:56.963 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:56.963 "hdgst": ${hdgst:-false}, 00:34:56.963 "ddgst": ${ddgst:-false} 00:34:56.963 }, 00:34:56.963 "method": "bdev_nvme_attach_controller" 00:34:56.963 } 00:34:56.963 EOF 00:34:56.963 )") 00:34:56.963 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:56.963 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:56.963 
12:43:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:56.963 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:56.963 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:56.963 12:43:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:56.963 12:43:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:56.963 12:43:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:56.963 { 00:34:56.963 "params": { 00:34:56.963 "name": "Nvme$subsystem", 00:34:56.963 "trtype": "$TEST_TRANSPORT", 00:34:56.963 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:56.963 "adrfam": "ipv4", 00:34:56.963 "trsvcid": "$NVMF_PORT", 00:34:56.963 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:56.963 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:56.963 "hdgst": ${hdgst:-false}, 00:34:56.963 "ddgst": ${ddgst:-false} 00:34:56.963 }, 00:34:56.963 "method": "bdev_nvme_attach_controller" 00:34:56.963 } 00:34:56.963 EOF 00:34:56.963 )") 00:34:56.963 12:43:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:56.963 12:43:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:56.963 12:43:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:56.963 12:43:18 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:56.963 "params": { 00:34:56.963 "name": "Nvme0", 00:34:56.963 "trtype": "tcp", 00:34:56.963 "traddr": "10.0.0.2", 00:34:56.963 "adrfam": "ipv4", 00:34:56.963 "trsvcid": "4420", 00:34:56.963 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:56.963 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:56.963 "hdgst": false, 00:34:56.963 "ddgst": false 00:34:56.963 }, 00:34:56.963 "method": "bdev_nvme_attach_controller" 00:34:56.963 },{ 00:34:56.963 "params": { 00:34:56.963 "name": "Nvme1", 00:34:56.963 "trtype": "tcp", 00:34:56.963 "traddr": "10.0.0.2", 00:34:56.963 "adrfam": "ipv4", 00:34:56.963 "trsvcid": "4420", 00:34:56.963 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:56.963 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:56.963 "hdgst": false, 00:34:56.963 "ddgst": false 00:34:56.963 }, 00:34:56.963 "method": "bdev_nvme_attach_controller" 00:34:56.963 },{ 00:34:56.963 "params": { 00:34:56.963 "name": "Nvme2", 00:34:56.963 "trtype": "tcp", 00:34:56.963 "traddr": "10.0.0.2", 00:34:56.963 "adrfam": "ipv4", 00:34:56.963 "trsvcid": "4420", 00:34:56.963 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:56.963 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:56.963 "hdgst": false, 00:34:56.963 "ddgst": false 00:34:56.963 }, 00:34:56.963 "method": "bdev_nvme_attach_controller" 00:34:56.963 }' 00:34:56.963 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:56.963 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:56.963 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:56.963 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:56.963 12:43:18 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:56.963 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:56.963 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:56.963 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:56.963 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:34:56.963 12:43:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:56.963 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:56.963 ... 00:34:56.963 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:56.963 ... 00:34:56.963 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:56.963 ... 
00:34:56.963 fio-3.35 00:34:56.963 Starting 24 threads 00:35:09.159 00:35:09.159 filename0: (groupid=0, jobs=1): err= 0: pid=1890757: Tue Dec 10 12:43:29 2024 00:35:09.159 read: IOPS=76, BW=304KiB/s (312kB/s)(3072KiB/10096msec) 00:35:09.159 slat (nsec): min=6918, max=38690, avg=8784.35, stdev=3313.86 00:35:09.159 clat (msec): min=77, max=284, avg=209.40, stdev=33.38 00:35:09.159 lat (msec): min=77, max=284, avg=209.41, stdev=33.38 00:35:09.159 clat percentiles (msec): 00:35:09.159 | 1.00th=[ 78], 5.00th=[ 127], 10.00th=[ 184], 20.00th=[ 213], 00:35:09.159 | 30.00th=[ 215], 40.00th=[ 218], 50.00th=[ 220], 60.00th=[ 222], 00:35:09.159 | 70.00th=[ 224], 80.00th=[ 226], 90.00th=[ 226], 95.00th=[ 228], 00:35:09.159 | 99.00th=[ 230], 99.50th=[ 230], 99.90th=[ 284], 99.95th=[ 284], 00:35:09.159 | 99.99th=[ 284] 00:35:09.159 bw ( KiB/s): min= 256, max= 512, per=4.59%, avg=300.80, stdev=71.29, samples=20 00:35:09.159 iops : min= 64, max= 128, avg=75.20, stdev=17.82, samples=20 00:35:09.159 lat (msec) : 100=4.17%, 250=95.57%, 500=0.26% 00:35:09.159 cpu : usr=98.64%, sys=0.98%, ctx=10, majf=0, minf=9 00:35:09.159 IO depths : 1=1.2%, 2=7.4%, 4=25.0%, 8=55.1%, 16=11.3%, 32=0.0%, >=64=0.0% 00:35:09.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.159 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.159 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.159 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.159 filename0: (groupid=0, jobs=1): err= 0: pid=1890758: Tue Dec 10 12:43:29 2024 00:35:09.159 read: IOPS=67, BW=269KiB/s (275kB/s)(2712KiB/10086msec) 00:35:09.159 slat (nsec): min=6936, max=75603, avg=15830.69, stdev=17743.37 00:35:09.159 clat (msec): min=172, max=409, avg=237.41, stdev=39.19 00:35:09.159 lat (msec): min=172, max=409, avg=237.42, stdev=39.20 00:35:09.159 clat percentiles (msec): 00:35:09.159 | 1.00th=[ 174], 5.00th=[ 209], 10.00th=[ 215], 20.00th=[ 220], 
00:35:09.159 | 30.00th=[ 222], 40.00th=[ 222], 50.00th=[ 224], 60.00th=[ 224], 00:35:09.159 | 70.00th=[ 226], 80.00th=[ 228], 90.00th=[ 305], 95.00th=[ 334], 00:35:09.159 | 99.00th=[ 355], 99.50th=[ 355], 99.90th=[ 409], 99.95th=[ 409], 00:35:09.159 | 99.99th=[ 409] 00:35:09.159 bw ( KiB/s): min= 128, max= 384, per=4.04%, avg=264.80, stdev=59.52, samples=20 00:35:09.159 iops : min= 32, max= 96, avg=66.20, stdev=14.88, samples=20 00:35:09.159 lat (msec) : 250=81.71%, 500=18.29% 00:35:09.159 cpu : usr=98.74%, sys=0.87%, ctx=13, majf=0, minf=9 00:35:09.159 IO depths : 1=2.8%, 2=6.0%, 4=15.9%, 8=65.5%, 16=9.7%, 32=0.0%, >=64=0.0% 00:35:09.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.159 complete : 0=0.0%, 4=91.4%, 8=3.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.159 issued rwts: total=678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.159 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.159 filename0: (groupid=0, jobs=1): err= 0: pid=1890760: Tue Dec 10 12:43:29 2024 00:35:09.159 read: IOPS=67, BW=269KiB/s (275kB/s)(2704KiB/10057msec) 00:35:09.159 slat (nsec): min=6969, max=29065, avg=8838.51, stdev=2618.51 00:35:09.159 clat (msec): min=76, max=590, avg=237.87, stdev=64.09 00:35:09.159 lat (msec): min=76, max=590, avg=237.88, stdev=64.09 00:35:09.159 clat percentiles (msec): 00:35:09.159 | 1.00th=[ 77], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 197], 00:35:09.159 | 30.00th=[ 209], 40.00th=[ 218], 50.00th=[ 220], 60.00th=[ 224], 00:35:09.159 | 70.00th=[ 249], 80.00th=[ 257], 90.00th=[ 330], 95.00th=[ 338], 00:35:09.159 | 99.00th=[ 493], 99.50th=[ 493], 99.90th=[ 592], 99.95th=[ 592], 00:35:09.159 | 99.99th=[ 592] 00:35:09.159 bw ( KiB/s): min= 112, max= 304, per=4.04%, avg=264.00, stdev=50.73, samples=20 00:35:09.159 iops : min= 28, max= 76, avg=66.00, stdev=12.68, samples=20 00:35:09.159 lat (msec) : 100=1.48%, 250=69.53%, 500=28.70%, 750=0.30% 00:35:09.159 cpu : usr=98.78%, sys=0.86%, ctx=12, majf=0, minf=11 
00:35:09.159 IO depths : 1=0.1%, 2=0.4%, 4=5.9%, 8=80.3%, 16=13.2%, 32=0.0%, >=64=0.0% 00:35:09.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.159 complete : 0=0.0%, 4=88.4%, 8=7.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.159 issued rwts: total=676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.159 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.159 filename0: (groupid=0, jobs=1): err= 0: pid=1890761: Tue Dec 10 12:43:29 2024 00:35:09.159 read: IOPS=76, BW=304KiB/s (312kB/s)(3072KiB/10096msec) 00:35:09.159 slat (nsec): min=6986, max=57884, avg=11923.62, stdev=5341.75 00:35:09.159 clat (msec): min=77, max=286, avg=210.22, stdev=30.85 00:35:09.159 lat (msec): min=77, max=286, avg=210.23, stdev=30.85 00:35:09.159 clat percentiles (msec): 00:35:09.159 | 1.00th=[ 79], 5.00th=[ 146], 10.00th=[ 199], 20.00th=[ 207], 00:35:09.159 | 30.00th=[ 215], 40.00th=[ 220], 50.00th=[ 222], 60.00th=[ 222], 00:35:09.159 | 70.00th=[ 222], 80.00th=[ 224], 90.00th=[ 226], 95.00th=[ 228], 00:35:09.159 | 99.00th=[ 228], 99.50th=[ 228], 99.90th=[ 288], 99.95th=[ 288], 00:35:09.159 | 99.99th=[ 288] 00:35:09.159 bw ( KiB/s): min= 256, max= 384, per=4.59%, avg=300.80, stdev=57.95, samples=20 00:35:09.159 iops : min= 64, max= 96, avg=75.20, stdev=14.49, samples=20 00:35:09.159 lat (msec) : 100=4.17%, 250=95.57%, 500=0.26% 00:35:09.159 cpu : usr=98.83%, sys=0.79%, ctx=15, majf=0, minf=9 00:35:09.159 IO depths : 1=2.5%, 2=8.7%, 4=25.0%, 8=53.8%, 16=10.0%, 32=0.0%, >=64=0.0% 00:35:09.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.159 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.159 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.159 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.159 filename0: (groupid=0, jobs=1): err= 0: pid=1890762: Tue Dec 10 12:43:29 2024 00:35:09.159 read: IOPS=72, BW=290KiB/s 
(297kB/s)(2928KiB/10095msec) 00:35:09.159 slat (nsec): min=6961, max=36035, avg=10385.39, stdev=4634.36 00:35:09.159 clat (msec): min=78, max=334, avg=220.21, stdev=43.49 00:35:09.159 lat (msec): min=78, max=334, avg=220.22, stdev=43.49 00:35:09.159 clat percentiles (msec): 00:35:09.159 | 1.00th=[ 79], 5.00th=[ 163], 10.00th=[ 184], 20.00th=[ 203], 00:35:09.159 | 30.00th=[ 215], 40.00th=[ 220], 50.00th=[ 222], 60.00th=[ 224], 00:35:09.159 | 70.00th=[ 224], 80.00th=[ 226], 90.00th=[ 266], 95.00th=[ 313], 00:35:09.159 | 99.00th=[ 334], 99.50th=[ 334], 99.90th=[ 334], 99.95th=[ 334], 00:35:09.159 | 99.99th=[ 334] 00:35:09.159 bw ( KiB/s): min= 224, max= 384, per=4.38%, avg=286.40, stdev=45.52, samples=20 00:35:09.159 iops : min= 56, max= 96, avg=71.60, stdev=11.38, samples=20 00:35:09.159 lat (msec) : 100=4.10%, 250=78.96%, 500=16.94% 00:35:09.159 cpu : usr=98.71%, sys=0.90%, ctx=10, majf=0, minf=9 00:35:09.159 IO depths : 1=0.4%, 2=2.7%, 4=12.7%, 8=71.7%, 16=12.4%, 32=0.0%, >=64=0.0% 00:35:09.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.159 complete : 0=0.0%, 4=90.5%, 8=4.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.159 issued rwts: total=732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.159 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.159 filename0: (groupid=0, jobs=1): err= 0: pid=1890763: Tue Dec 10 12:43:29 2024 00:35:09.159 read: IOPS=74, BW=298KiB/s (305kB/s)(3008KiB/10095msec) 00:35:09.159 slat (nsec): min=6966, max=40696, avg=11550.62, stdev=5333.98 00:35:09.159 clat (msec): min=68, max=277, avg=213.84, stdev=31.36 00:35:09.159 lat (msec): min=68, max=277, avg=213.85, stdev=31.36 00:35:09.159 clat percentiles (msec): 00:35:09.159 | 1.00th=[ 79], 5.00th=[ 146], 10.00th=[ 203], 20.00th=[ 215], 00:35:09.159 | 30.00th=[ 218], 40.00th=[ 220], 50.00th=[ 222], 60.00th=[ 224], 00:35:09.159 | 70.00th=[ 224], 80.00th=[ 226], 90.00th=[ 226], 95.00th=[ 228], 00:35:09.159 | 99.00th=[ 275], 99.50th=[ 275], 
99.90th=[ 279], 99.95th=[ 279], 00:35:09.159 | 99.99th=[ 279] 00:35:09.159 bw ( KiB/s): min= 256, max= 384, per=4.50%, avg=294.40, stdev=55.28, samples=20 00:35:09.159 iops : min= 64, max= 96, avg=73.60, stdev=13.82, samples=20 00:35:09.159 lat (msec) : 100=4.26%, 250=93.35%, 500=2.39% 00:35:09.159 cpu : usr=98.51%, sys=1.11%, ctx=14, majf=0, minf=9 00:35:09.159 IO depths : 1=0.7%, 2=6.9%, 4=25.0%, 8=55.6%, 16=11.8%, 32=0.0%, >=64=0.0% 00:35:09.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.159 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.159 issued rwts: total=752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.159 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.159 filename0: (groupid=0, jobs=1): err= 0: pid=1890764: Tue Dec 10 12:43:29 2024 00:35:09.159 read: IOPS=68, BW=274KiB/s (281kB/s)(2760KiB/10057msec) 00:35:09.159 slat (nsec): min=5680, max=31729, avg=9868.67, stdev=3791.65 00:35:09.159 clat (msec): min=171, max=391, avg=232.37, stdev=32.62 00:35:09.159 lat (msec): min=171, max=391, avg=232.38, stdev=32.62 00:35:09.159 clat percentiles (msec): 00:35:09.159 | 1.00th=[ 205], 5.00th=[ 211], 10.00th=[ 215], 20.00th=[ 218], 00:35:09.159 | 30.00th=[ 220], 40.00th=[ 222], 50.00th=[ 222], 60.00th=[ 224], 00:35:09.159 | 70.00th=[ 226], 80.00th=[ 226], 90.00th=[ 275], 95.00th=[ 321], 00:35:09.160 | 99.00th=[ 347], 99.50th=[ 351], 99.90th=[ 393], 99.95th=[ 393], 00:35:09.160 | 99.99th=[ 393] 00:35:09.160 bw ( KiB/s): min= 128, max= 384, per=4.12%, avg=269.60, stdev=55.00, samples=20 00:35:09.160 iops : min= 32, max= 96, avg=67.40, stdev=13.75, samples=20 00:35:09.160 lat (msec) : 250=86.09%, 500=13.91% 00:35:09.160 cpu : usr=98.73%, sys=0.90%, ctx=8, majf=0, minf=9 00:35:09.160 IO depths : 1=1.7%, 2=3.8%, 4=12.2%, 8=71.4%, 16=10.9%, 32=0.0%, >=64=0.0% 00:35:09.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.160 complete : 0=0.0%, 4=90.3%, 
8=4.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.160 issued rwts: total=690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.160 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.160 filename0: (groupid=0, jobs=1): err= 0: pid=1890765: Tue Dec 10 12:43:29 2024 00:35:09.160 read: IOPS=71, BW=287KiB/s (294kB/s)(2888KiB/10067msec) 00:35:09.160 slat (nsec): min=6951, max=35262, avg=9619.93, stdev=5312.53 00:35:09.160 clat (msec): min=68, max=330, avg=222.90, stdev=28.09 00:35:09.160 lat (msec): min=68, max=330, avg=222.91, stdev=28.09 00:35:09.160 clat percentiles (msec): 00:35:09.160 | 1.00th=[ 174], 5.00th=[ 194], 10.00th=[ 207], 20.00th=[ 215], 00:35:09.160 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 222], 60.00th=[ 224], 00:35:09.160 | 70.00th=[ 224], 80.00th=[ 226], 90.00th=[ 228], 95.00th=[ 284], 00:35:09.160 | 99.00th=[ 317], 99.50th=[ 330], 99.90th=[ 330], 99.95th=[ 330], 00:35:09.160 | 99.99th=[ 330] 00:35:09.160 bw ( KiB/s): min= 128, max= 368, per=4.32%, avg=282.40, stdev=55.49, samples=20 00:35:09.160 iops : min= 32, max= 92, avg=70.60, stdev=13.87, samples=20 00:35:09.160 lat (msec) : 100=0.83%, 250=91.41%, 500=7.76% 00:35:09.160 cpu : usr=98.60%, sys=1.02%, ctx=14, majf=0, minf=10 00:35:09.160 IO depths : 1=0.6%, 2=1.2%, 4=8.2%, 8=78.0%, 16=12.0%, 32=0.0%, >=64=0.0% 00:35:09.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.160 complete : 0=0.0%, 4=89.3%, 8=5.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.160 issued rwts: total=722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.160 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.160 filename1: (groupid=0, jobs=1): err= 0: pid=1890766: Tue Dec 10 12:43:29 2024 00:35:09.160 read: IOPS=49, BW=197KiB/s (202kB/s)(1984KiB/10057msec) 00:35:09.160 slat (nsec): min=5935, max=32822, avg=8784.92, stdev=2963.35 00:35:09.160 clat (msec): min=162, max=495, avg=324.32, stdev=59.95 00:35:09.160 lat (msec): min=162, max=495, avg=324.33, stdev=59.95 
00:35:09.160 clat percentiles (msec): 00:35:09.160 | 1.00th=[ 163], 5.00th=[ 218], 10.00th=[ 226], 20.00th=[ 313], 00:35:09.160 | 30.00th=[ 321], 40.00th=[ 321], 50.00th=[ 321], 60.00th=[ 330], 00:35:09.160 | 70.00th=[ 342], 80.00th=[ 347], 90.00th=[ 351], 95.00th=[ 443], 00:35:09.160 | 99.00th=[ 498], 99.50th=[ 498], 99.90th=[ 498], 99.95th=[ 498], 00:35:09.160 | 99.99th=[ 498] 00:35:09.160 bw ( KiB/s): min= 128, max= 256, per=2.94%, avg=192.00, stdev=64.21, samples=20 00:35:09.160 iops : min= 32, max= 64, avg=48.00, stdev=16.05, samples=20 00:35:09.160 lat (msec) : 250=11.69%, 500=88.31% 00:35:09.160 cpu : usr=98.54%, sys=1.10%, ctx=11, majf=0, minf=9 00:35:09.160 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:35:09.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.160 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.160 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.160 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.160 filename1: (groupid=0, jobs=1): err= 0: pid=1890767: Tue Dec 10 12:43:29 2024 00:35:09.160 read: IOPS=70, BW=282KiB/s (289kB/s)(2848KiB/10096msec) 00:35:09.160 slat (nsec): min=6952, max=36635, avg=10659.09, stdev=5009.48 00:35:09.160 clat (msec): min=78, max=348, avg=226.29, stdev=52.55 00:35:09.160 lat (msec): min=78, max=348, avg=226.30, stdev=52.55 00:35:09.160 clat percentiles (msec): 00:35:09.160 | 1.00th=[ 79], 5.00th=[ 146], 10.00th=[ 199], 20.00th=[ 205], 00:35:09.160 | 30.00th=[ 207], 40.00th=[ 218], 50.00th=[ 222], 60.00th=[ 226], 00:35:09.160 | 70.00th=[ 234], 80.00th=[ 239], 90.00th=[ 321], 95.00th=[ 342], 00:35:09.160 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 351], 99.95th=[ 351], 00:35:09.160 | 99.99th=[ 351] 00:35:09.160 bw ( KiB/s): min= 224, max= 384, per=4.26%, avg=278.40, stdev=43.25, samples=20 00:35:09.160 iops : min= 56, max= 96, avg=69.60, stdev=10.81, samples=20 00:35:09.160 lat 
(msec) : 100=4.49%, 250=80.62%, 500=14.89% 00:35:09.160 cpu : usr=98.60%, sys=0.99%, ctx=12, majf=0, minf=9 00:35:09.160 IO depths : 1=0.6%, 2=1.8%, 4=9.0%, 8=76.0%, 16=12.6%, 32=0.0%, >=64=0.0% 00:35:09.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.160 complete : 0=0.0%, 4=89.3%, 8=6.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.160 issued rwts: total=712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.160 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.160 filename1: (groupid=0, jobs=1): err= 0: pid=1890768: Tue Dec 10 12:43:29 2024 00:35:09.160 read: IOPS=67, BW=269KiB/s (275kB/s)(2704KiB/10067msec) 00:35:09.160 slat (nsec): min=6988, max=51589, avg=9221.56, stdev=4203.75 00:35:09.160 clat (msec): min=163, max=413, avg=237.23, stdev=45.10 00:35:09.160 lat (msec): min=163, max=413, avg=237.24, stdev=45.10 00:35:09.160 clat percentiles (msec): 00:35:09.160 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 199], 20.00th=[ 207], 00:35:09.160 | 30.00th=[ 211], 40.00th=[ 218], 50.00th=[ 222], 60.00th=[ 226], 00:35:09.160 | 70.00th=[ 241], 80.00th=[ 245], 90.00th=[ 326], 95.00th=[ 342], 00:35:09.160 | 99.00th=[ 347], 99.50th=[ 347], 99.90th=[ 414], 99.95th=[ 414], 00:35:09.160 | 99.99th=[ 414] 00:35:09.160 bw ( KiB/s): min= 128, max= 368, per=4.09%, avg=268.00, stdev=52.91, samples=20 00:35:09.160 iops : min= 32, max= 92, avg=67.00, stdev=13.23, samples=20 00:35:09.160 lat (msec) : 250=81.66%, 500=18.34% 00:35:09.160 cpu : usr=98.91%, sys=0.72%, ctx=11, majf=0, minf=9 00:35:09.160 IO depths : 1=0.3%, 2=1.3%, 4=8.1%, 8=77.2%, 16=13.0%, 32=0.0%, >=64=0.0% 00:35:09.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.160 complete : 0=0.0%, 4=89.0%, 8=6.5%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.160 issued rwts: total=676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.160 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.160 filename1: (groupid=0, jobs=1): err= 0: 
pid=1890770: Tue Dec 10 12:43:29 2024 00:35:09.160 read: IOPS=69, BW=279KiB/s (286kB/s)(2816KiB/10076msec) 00:35:09.160 slat (nsec): min=5867, max=28668, avg=8779.55, stdev=2933.39 00:35:09.160 clat (msec): min=164, max=400, avg=227.88, stdev=31.27 00:35:09.160 lat (msec): min=164, max=400, avg=227.89, stdev=31.27 00:35:09.160 clat percentiles (msec): 00:35:09.160 | 1.00th=[ 182], 5.00th=[ 194], 10.00th=[ 207], 20.00th=[ 215], 00:35:09.160 | 30.00th=[ 220], 40.00th=[ 222], 50.00th=[ 222], 60.00th=[ 224], 00:35:09.160 | 70.00th=[ 226], 80.00th=[ 226], 90.00th=[ 247], 95.00th=[ 317], 00:35:09.160 | 99.00th=[ 334], 99.50th=[ 338], 99.90th=[ 401], 99.95th=[ 401], 00:35:09.160 | 99.99th=[ 401] 00:35:09.160 bw ( KiB/s): min= 128, max= 336, per=4.27%, avg=279.20, stdev=48.27, samples=20 00:35:09.160 iops : min= 32, max= 84, avg=69.80, stdev=12.07, samples=20 00:35:09.160 lat (msec) : 250=90.34%, 500=9.66% 00:35:09.160 cpu : usr=98.63%, sys=0.99%, ctx=14, majf=0, minf=9 00:35:09.160 IO depths : 1=0.3%, 2=0.7%, 4=7.1%, 8=79.4%, 16=12.5%, 32=0.0%, >=64=0.0% 00:35:09.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.160 complete : 0=0.0%, 4=88.9%, 8=5.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.160 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.160 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.160 filename1: (groupid=0, jobs=1): err= 0: pid=1890771: Tue Dec 10 12:43:29 2024 00:35:09.160 read: IOPS=72, BW=292KiB/s (299kB/s)(2944KiB/10095msec) 00:35:09.160 slat (nsec): min=6934, max=40508, avg=9234.46, stdev=4054.43 00:35:09.160 clat (msec): min=78, max=338, avg=218.27, stdev=39.31 00:35:09.160 lat (msec): min=78, max=338, avg=218.28, stdev=39.30 00:35:09.160 clat percentiles (msec): 00:35:09.160 | 1.00th=[ 79], 5.00th=[ 163], 10.00th=[ 188], 20.00th=[ 209], 00:35:09.160 | 30.00th=[ 218], 40.00th=[ 220], 50.00th=[ 222], 60.00th=[ 224], 00:35:09.160 | 70.00th=[ 224], 80.00th=[ 226], 90.00th=[ 
253], 95.00th=[ 271], 00:35:09.160 | 99.00th=[ 338], 99.50th=[ 338], 99.90th=[ 338], 99.95th=[ 338], 00:35:09.160 | 99.99th=[ 338] 00:35:09.160 bw ( KiB/s): min= 224, max= 384, per=4.41%, avg=288.00, stdev=44.96, samples=20 00:35:09.160 iops : min= 56, max= 96, avg=72.00, stdev=11.24, samples=20 00:35:09.160 lat (msec) : 100=4.35%, 250=84.24%, 500=11.41% 00:35:09.160 cpu : usr=98.75%, sys=0.88%, ctx=10, majf=0, minf=9 00:35:09.160 IO depths : 1=0.5%, 2=1.2%, 4=7.9%, 8=78.1%, 16=12.2%, 32=0.0%, >=64=0.0% 00:35:09.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.160 complete : 0=0.0%, 4=89.1%, 8=5.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.160 issued rwts: total=736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.160 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.160 filename1: (groupid=0, jobs=1): err= 0: pid=1890772: Tue Dec 10 12:43:29 2024 00:35:09.160 read: IOPS=71, BW=286KiB/s (293kB/s)(2880KiB/10061msec) 00:35:09.160 slat (nsec): min=4185, max=59261, avg=10725.45, stdev=6171.22 00:35:09.160 clat (msec): min=87, max=395, avg=223.48, stdev=27.33 00:35:09.160 lat (msec): min=87, max=395, avg=223.49, stdev=27.33 00:35:09.160 clat percentiles (msec): 00:35:09.160 | 1.00th=[ 163], 5.00th=[ 192], 10.00th=[ 211], 20.00th=[ 215], 00:35:09.160 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 222], 60.00th=[ 222], 00:35:09.160 | 70.00th=[ 224], 80.00th=[ 226], 90.00th=[ 228], 95.00th=[ 279], 00:35:09.160 | 99.00th=[ 321], 99.50th=[ 321], 99.90th=[ 397], 99.95th=[ 397], 00:35:09.160 | 99.99th=[ 397] 00:35:09.160 bw ( KiB/s): min= 128, max= 368, per=4.30%, avg=281.60, stdev=59.51, samples=20 00:35:09.160 iops : min= 32, max= 92, avg=70.40, stdev=14.88, samples=20 00:35:09.160 lat (msec) : 100=0.28%, 250=93.06%, 500=6.67% 00:35:09.160 cpu : usr=98.79%, sys=0.84%, ctx=9, majf=0, minf=9 00:35:09.160 IO depths : 1=0.1%, 2=6.4%, 4=25.0%, 8=56.1%, 16=12.4%, 32=0.0%, >=64=0.0% 00:35:09.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.160 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.160 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.160 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.160 filename1: (groupid=0, jobs=1): err= 0: pid=1890773: Tue Dec 10 12:43:29 2024 00:35:09.160 read: IOPS=67, BW=269KiB/s (275kB/s)(2704KiB/10058msec) 00:35:09.161 slat (nsec): min=4370, max=26244, avg=8669.11, stdev=2654.35 00:35:09.161 clat (msec): min=78, max=577, avg=237.87, stdev=64.29 00:35:09.161 lat (msec): min=78, max=577, avg=237.88, stdev=64.29 00:35:09.161 clat percentiles (msec): 00:35:09.161 | 1.00th=[ 79], 5.00th=[ 180], 10.00th=[ 188], 20.00th=[ 192], 00:35:09.161 | 30.00th=[ 209], 40.00th=[ 220], 50.00th=[ 222], 60.00th=[ 224], 00:35:09.161 | 70.00th=[ 251], 80.00th=[ 257], 90.00th=[ 330], 95.00th=[ 338], 00:35:09.161 | 99.00th=[ 498], 99.50th=[ 498], 99.90th=[ 575], 99.95th=[ 575], 00:35:09.161 | 99.99th=[ 575] 00:35:09.161 bw ( KiB/s): min= 112, max= 304, per=4.04%, avg=264.00, stdev=50.73, samples=20 00:35:09.161 iops : min= 28, max= 76, avg=66.00, stdev=12.68, samples=20 00:35:09.161 lat (msec) : 100=1.48%, 250=65.98%, 500=32.25%, 750=0.30% 00:35:09.161 cpu : usr=98.54%, sys=1.10%, ctx=14, majf=0, minf=9 00:35:09.161 IO depths : 1=0.1%, 2=0.4%, 4=5.9%, 8=80.3%, 16=13.2%, 32=0.0%, >=64=0.0% 00:35:09.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.161 complete : 0=0.0%, 4=88.4%, 8=7.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.161 issued rwts: total=676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.161 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.161 filename1: (groupid=0, jobs=1): err= 0: pid=1890774: Tue Dec 10 12:43:29 2024 00:35:09.161 read: IOPS=68, BW=274KiB/s (281kB/s)(2760KiB/10057msec) 00:35:09.161 slat (nsec): min=6927, max=28575, avg=9079.20, stdev=3109.00 00:35:09.161 clat (msec): min=58, max=494, 
avg=233.03, stdev=55.27 00:35:09.161 lat (msec): min=58, max=494, avg=233.04, stdev=55.27 00:35:09.161 clat percentiles (msec): 00:35:09.161 | 1.00th=[ 169], 5.00th=[ 203], 10.00th=[ 205], 20.00th=[ 215], 00:35:09.161 | 30.00th=[ 218], 40.00th=[ 222], 50.00th=[ 222], 60.00th=[ 224], 00:35:09.161 | 70.00th=[ 224], 80.00th=[ 226], 90.00th=[ 288], 95.00th=[ 347], 00:35:09.161 | 99.00th=[ 493], 99.50th=[ 493], 99.90th=[ 493], 99.95th=[ 493], 00:35:09.161 | 99.99th=[ 493] 00:35:09.161 bw ( KiB/s): min= 128, max= 384, per=4.12%, avg=269.60, stdev=74.01, samples=20 00:35:09.161 iops : min= 32, max= 96, avg=67.40, stdev=18.50, samples=20 00:35:09.161 lat (msec) : 100=0.87%, 250=87.25%, 500=11.88% 00:35:09.161 cpu : usr=98.78%, sys=0.85%, ctx=14, majf=0, minf=9 00:35:09.161 IO depths : 1=1.4%, 2=4.3%, 4=14.8%, 8=68.3%, 16=11.2%, 32=0.0%, >=64=0.0% 00:35:09.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.161 complete : 0=0.0%, 4=91.1%, 8=3.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.161 issued rwts: total=690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.161 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.161 filename2: (groupid=0, jobs=1): err= 0: pid=1890775: Tue Dec 10 12:43:29 2024 00:35:09.161 read: IOPS=70, BW=282KiB/s (289kB/s)(2840KiB/10057msec) 00:35:09.161 slat (nsec): min=5881, max=29179, avg=9184.73, stdev=2994.20 00:35:09.161 clat (msec): min=80, max=494, avg=226.42, stdev=47.67 00:35:09.161 lat (msec): min=80, max=494, avg=226.43, stdev=47.67 00:35:09.161 clat percentiles (msec): 00:35:09.161 | 1.00th=[ 81], 5.00th=[ 199], 10.00th=[ 209], 20.00th=[ 215], 00:35:09.161 | 30.00th=[ 218], 40.00th=[ 220], 50.00th=[ 222], 60.00th=[ 224], 00:35:09.161 | 70.00th=[ 224], 80.00th=[ 226], 90.00th=[ 228], 95.00th=[ 275], 00:35:09.161 | 99.00th=[ 493], 99.50th=[ 493], 99.90th=[ 493], 99.95th=[ 493], 00:35:09.161 | 99.99th=[ 493] 00:35:09.161 bw ( KiB/s): min= 128, max= 336, per=4.24%, avg=277.60, stdev=50.93, 
samples=20 00:35:09.161 iops : min= 32, max= 84, avg=69.40, stdev=12.73, samples=20 00:35:09.161 lat (msec) : 100=1.41%, 250=92.11%, 500=6.48% 00:35:09.161 cpu : usr=98.56%, sys=1.08%, ctx=14, majf=0, minf=9 00:35:09.161 IO depths : 1=0.6%, 2=2.0%, 4=10.4%, 8=75.1%, 16=12.0%, 32=0.0%, >=64=0.0% 00:35:09.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.161 complete : 0=0.0%, 4=89.9%, 8=4.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.161 issued rwts: total=710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.161 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.161 filename2: (groupid=0, jobs=1): err= 0: pid=1890776: Tue Dec 10 12:43:29 2024 00:35:09.161 read: IOPS=49, BW=197KiB/s (202kB/s)(1984KiB/10056msec) 00:35:09.161 slat (nsec): min=5925, max=30457, avg=8867.22, stdev=3008.77 00:35:09.161 clat (msec): min=161, max=495, avg=324.30, stdev=58.93 00:35:09.161 lat (msec): min=161, max=495, avg=324.31, stdev=58.93 00:35:09.161 clat percentiles (msec): 00:35:09.161 | 1.00th=[ 203], 5.00th=[ 215], 10.00th=[ 224], 20.00th=[ 296], 00:35:09.161 | 30.00th=[ 321], 40.00th=[ 321], 50.00th=[ 326], 60.00th=[ 334], 00:35:09.161 | 70.00th=[ 347], 80.00th=[ 347], 90.00th=[ 351], 95.00th=[ 443], 00:35:09.161 | 99.00th=[ 498], 99.50th=[ 498], 99.90th=[ 498], 99.95th=[ 498], 00:35:09.161 | 99.99th=[ 498] 00:35:09.161 bw ( KiB/s): min= 128, max= 256, per=2.94%, avg=192.00, stdev=59.64, samples=20 00:35:09.161 iops : min= 32, max= 64, avg=48.00, stdev=14.91, samples=20 00:35:09.161 lat (msec) : 250=11.69%, 500=88.31% 00:35:09.161 cpu : usr=98.73%, sys=0.91%, ctx=11, majf=0, minf=9 00:35:09.161 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:35:09.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.161 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.161 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.161 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:35:09.161 filename2: (groupid=0, jobs=1): err= 0: pid=1890777: Tue Dec 10 12:43:29 2024 00:35:09.161 read: IOPS=76, BW=304KiB/s (312kB/s)(3072KiB/10096msec) 00:35:09.161 slat (nsec): min=6954, max=62621, avg=12530.32, stdev=6528.92 00:35:09.161 clat (msec): min=77, max=263, avg=210.21, stdev=30.80 00:35:09.161 lat (msec): min=77, max=263, avg=210.22, stdev=30.80 00:35:09.161 clat percentiles (msec): 00:35:09.161 | 1.00th=[ 79], 5.00th=[ 146], 10.00th=[ 199], 20.00th=[ 209], 00:35:09.161 | 30.00th=[ 215], 40.00th=[ 220], 50.00th=[ 222], 60.00th=[ 222], 00:35:09.161 | 70.00th=[ 222], 80.00th=[ 224], 90.00th=[ 226], 95.00th=[ 228], 00:35:09.161 | 99.00th=[ 228], 99.50th=[ 228], 99.90th=[ 264], 99.95th=[ 264], 00:35:09.161 | 99.99th=[ 264] 00:35:09.161 bw ( KiB/s): min= 256, max= 384, per=4.59%, avg=300.80, stdev=54.59, samples=20 00:35:09.161 iops : min= 64, max= 96, avg=75.20, stdev=13.65, samples=20 00:35:09.161 lat (msec) : 100=4.43%, 250=95.31%, 500=0.26% 00:35:09.161 cpu : usr=98.78%, sys=0.84%, ctx=10, majf=0, minf=9 00:35:09.161 IO depths : 1=0.7%, 2=6.9%, 4=25.0%, 8=55.6%, 16=11.8%, 32=0.0%, >=64=0.0% 00:35:09.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.161 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.161 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.161 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.161 filename2: (groupid=0, jobs=1): err= 0: pid=1890778: Tue Dec 10 12:43:29 2024 00:35:09.161 read: IOPS=70, BW=280KiB/s (287kB/s)(2824KiB/10075msec) 00:35:09.161 slat (nsec): min=5557, max=56860, avg=9864.70, stdev=5365.72 00:35:09.161 clat (msec): min=177, max=340, avg=227.65, stdev=29.62 00:35:09.161 lat (msec): min=177, max=340, avg=227.66, stdev=29.62 00:35:09.161 clat percentiles (msec): 00:35:09.161 | 1.00th=[ 182], 5.00th=[ 194], 10.00th=[ 211], 20.00th=[ 215], 00:35:09.161 | 30.00th=[ 
220], 40.00th=[ 222], 50.00th=[ 222], 60.00th=[ 224], 00:35:09.161 | 70.00th=[ 226], 80.00th=[ 226], 90.00th=[ 249], 95.00th=[ 317], 00:35:09.161 | 99.00th=[ 326], 99.50th=[ 342], 99.90th=[ 342], 99.95th=[ 342], 00:35:09.161 | 99.99th=[ 342] 00:35:09.161 bw ( KiB/s): min= 128, max= 336, per=4.26%, avg=278.40, stdev=49.90, samples=20 00:35:09.161 iops : min= 32, max= 84, avg=69.60, stdev=12.47, samples=20 00:35:09.161 lat (msec) : 250=90.08%, 500=9.92% 00:35:09.161 cpu : usr=98.73%, sys=0.85%, ctx=31, majf=0, minf=9 00:35:09.161 IO depths : 1=0.8%, 2=2.0%, 4=9.5%, 8=75.9%, 16=11.8%, 32=0.0%, >=64=0.0% 00:35:09.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.161 complete : 0=0.0%, 4=89.6%, 8=4.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.161 issued rwts: total=706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.161 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.161 filename2: (groupid=0, jobs=1): err= 0: pid=1890780: Tue Dec 10 12:43:29 2024 00:35:09.161 read: IOPS=71, BW=286KiB/s (293kB/s)(2880KiB/10067msec) 00:35:09.161 slat (nsec): min=4697, max=59290, avg=10580.07, stdev=6364.94 00:35:09.161 clat (msec): min=92, max=400, avg=223.60, stdev=27.49 00:35:09.161 lat (msec): min=92, max=400, avg=223.61, stdev=27.49 00:35:09.161 clat percentiles (msec): 00:35:09.161 | 1.00th=[ 163], 5.00th=[ 192], 10.00th=[ 211], 20.00th=[ 215], 00:35:09.161 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 222], 60.00th=[ 222], 00:35:09.161 | 70.00th=[ 224], 80.00th=[ 226], 90.00th=[ 228], 95.00th=[ 288], 00:35:09.161 | 99.00th=[ 321], 99.50th=[ 321], 99.90th=[ 401], 99.95th=[ 401], 00:35:09.161 | 99.99th=[ 401] 00:35:09.161 bw ( KiB/s): min= 128, max= 368, per=4.30%, avg=281.60, stdev=59.51, samples=20 00:35:09.161 iops : min= 32, max= 92, avg=70.40, stdev=14.88, samples=20 00:35:09.161 lat (msec) : 100=0.28%, 250=93.06%, 500=6.67% 00:35:09.161 cpu : usr=98.67%, sys=0.95%, ctx=9, majf=0, minf=9 00:35:09.161 IO depths : 1=0.7%, 
2=6.9%, 4=25.0%, 8=55.6%, 16=11.8%, 32=0.0%, >=64=0.0% 00:35:09.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.161 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.161 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.161 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.161 filename2: (groupid=0, jobs=1): err= 0: pid=1890781: Tue Dec 10 12:43:29 2024 00:35:09.161 read: IOPS=66, BW=264KiB/s (270kB/s)(2656KiB/10059msec) 00:35:09.161 slat (nsec): min=4383, max=28321, avg=9380.92, stdev=3417.29 00:35:09.161 clat (msec): min=183, max=570, avg=241.91, stdev=60.09 00:35:09.161 lat (msec): min=183, max=570, avg=241.92, stdev=60.09 00:35:09.161 clat percentiles (msec): 00:35:09.161 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 199], 00:35:09.161 | 30.00th=[ 215], 40.00th=[ 218], 50.00th=[ 222], 60.00th=[ 224], 00:35:09.161 | 70.00th=[ 255], 80.00th=[ 259], 90.00th=[ 326], 95.00th=[ 342], 00:35:09.161 | 99.00th=[ 498], 99.50th=[ 498], 99.90th=[ 567], 99.95th=[ 567], 00:35:09.161 | 99.99th=[ 567] 00:35:09.161 bw ( KiB/s): min= 112, max= 304, per=3.97%, avg=259.20, stdev=43.62, samples=20 00:35:09.161 iops : min= 28, max= 76, avg=64.80, stdev=10.90, samples=20 00:35:09.161 lat (msec) : 250=64.76%, 500=34.94%, 750=0.30% 00:35:09.161 cpu : usr=98.82%, sys=0.81%, ctx=13, majf=0, minf=9 00:35:09.161 IO depths : 1=0.6%, 2=1.8%, 4=8.7%, 8=76.2%, 16=12.7%, 32=0.0%, >=64=0.0% 00:35:09.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.161 complete : 0=0.0%, 4=89.2%, 8=6.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.161 issued rwts: total=664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.161 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.161 filename2: (groupid=0, jobs=1): err= 0: pid=1890782: Tue Dec 10 12:43:29 2024 00:35:09.162 read: IOPS=73, BW=294KiB/s (301kB/s)(2968KiB/10095msec) 00:35:09.162 slat (nsec): 
min=9530, max=48849, avg=18626.51, stdev=5679.55 00:35:09.162 clat (msec): min=66, max=334, avg=217.04, stdev=38.01 00:35:09.162 lat (msec): min=66, max=334, avg=217.06, stdev=38.01 00:35:09.162 clat percentiles (msec): 00:35:09.162 | 1.00th=[ 78], 5.00th=[ 159], 10.00th=[ 203], 20.00th=[ 213], 00:35:09.162 | 30.00th=[ 218], 40.00th=[ 220], 50.00th=[ 222], 60.00th=[ 224], 00:35:09.162 | 70.00th=[ 226], 80.00th=[ 226], 90.00th=[ 228], 95.00th=[ 275], 00:35:09.162 | 99.00th=[ 321], 99.50th=[ 326], 99.90th=[ 334], 99.95th=[ 334], 00:35:09.162 | 99.99th=[ 334] 00:35:09.162 bw ( KiB/s): min= 224, max= 384, per=4.44%, avg=290.40, stdev=51.72, samples=20 00:35:09.162 iops : min= 56, max= 96, avg=72.60, stdev=12.93, samples=20 00:35:09.162 lat (msec) : 100=4.31%, 250=89.49%, 500=6.20% 00:35:09.162 cpu : usr=98.54%, sys=1.06%, ctx=12, majf=0, minf=9 00:35:09.162 IO depths : 1=2.0%, 2=4.7%, 4=14.2%, 8=68.6%, 16=10.5%, 32=0.0%, >=64=0.0% 00:35:09.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.162 complete : 0=0.0%, 4=91.0%, 8=3.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.162 issued rwts: total=742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.162 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.162 filename2: (groupid=0, jobs=1): err= 0: pid=1890783: Tue Dec 10 12:43:29 2024 00:35:09.162 read: IOPS=47, BW=192KiB/s (196kB/s)(1920KiB/10011msec) 00:35:09.162 slat (nsec): min=6949, max=71014, avg=17289.15, stdev=18957.67 00:35:09.162 clat (msec): min=198, max=593, avg=333.53, stdev=53.86 00:35:09.162 lat (msec): min=198, max=593, avg=333.54, stdev=53.86 00:35:09.162 clat percentiles (msec): 00:35:09.162 | 1.00th=[ 213], 5.00th=[ 222], 10.00th=[ 296], 20.00th=[ 309], 00:35:09.162 | 30.00th=[ 321], 40.00th=[ 321], 50.00th=[ 326], 60.00th=[ 342], 00:35:09.162 | 70.00th=[ 342], 80.00th=[ 347], 90.00th=[ 359], 95.00th=[ 451], 00:35:09.162 | 99.00th=[ 498], 99.50th=[ 498], 99.90th=[ 592], 99.95th=[ 592], 00:35:09.162 | 
99.99th=[ 592] 00:35:09.162 bw ( KiB/s): min= 112, max= 256, per=2.83%, avg=185.60, stdev=59.51, samples=20 00:35:09.162 iops : min= 28, max= 64, avg=46.40, stdev=14.88, samples=20 00:35:09.162 lat (msec) : 250=6.25%, 500=93.33%, 750=0.42% 00:35:09.162 cpu : usr=98.70%, sys=0.91%, ctx=32, majf=0, minf=9 00:35:09.162 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:35:09.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.162 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:09.162 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:09.162 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:09.162 00:35:09.162 Run status group 0 (all jobs): 00:35:09.162 READ: bw=6530KiB/s (6687kB/s), 192KiB/s-304KiB/s (196kB/s-312kB/s), io=64.4MiB (67.5MB), run=10011-10096msec 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.162 bdev_null0 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.162 [2024-12-10 12:43:30.116721] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.162 12:43:30 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.162 bdev_null1 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:09.162 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:09.163 12:43:30 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:09.163 { 00:35:09.163 "params": { 00:35:09.163 "name": "Nvme$subsystem", 00:35:09.163 "trtype": "$TEST_TRANSPORT", 00:35:09.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:09.163 "adrfam": "ipv4", 00:35:09.163 "trsvcid": "$NVMF_PORT", 00:35:09.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:09.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:09.163 "hdgst": ${hdgst:-false}, 00:35:09.163 "ddgst": ${ddgst:-false} 00:35:09.163 }, 00:35:09.163 "method": "bdev_nvme_attach_controller" 00:35:09.163 } 00:35:09.163 EOF 00:35:09.163 )") 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:09.163 { 00:35:09.163 "params": { 00:35:09.163 "name": "Nvme$subsystem", 00:35:09.163 "trtype": "$TEST_TRANSPORT", 00:35:09.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:09.163 "adrfam": "ipv4", 00:35:09.163 "trsvcid": "$NVMF_PORT", 00:35:09.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:09.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:09.163 "hdgst": ${hdgst:-false}, 00:35:09.163 "ddgst": ${ddgst:-false} 00:35:09.163 }, 00:35:09.163 "method": "bdev_nvme_attach_controller" 00:35:09.163 } 00:35:09.163 EOF 00:35:09.163 )") 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:09.163 
12:43:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:09.163 "params": { 00:35:09.163 "name": "Nvme0", 00:35:09.163 "trtype": "tcp", 00:35:09.163 "traddr": "10.0.0.2", 00:35:09.163 "adrfam": "ipv4", 00:35:09.163 "trsvcid": "4420", 00:35:09.163 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:09.163 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:09.163 "hdgst": false, 00:35:09.163 "ddgst": false 00:35:09.163 }, 00:35:09.163 "method": "bdev_nvme_attach_controller" 00:35:09.163 },{ 00:35:09.163 "params": { 00:35:09.163 "name": "Nvme1", 00:35:09.163 "trtype": "tcp", 00:35:09.163 "traddr": "10.0.0.2", 00:35:09.163 "adrfam": "ipv4", 00:35:09.163 "trsvcid": "4420", 00:35:09.163 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:09.163 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:09.163 "hdgst": false, 00:35:09.163 "ddgst": false 00:35:09.163 }, 00:35:09.163 "method": "bdev_nvme_attach_controller" 00:35:09.163 }' 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print 
$3}' 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:35:09.163 12:43:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:09.163 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:09.163 ... 00:35:09.163 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:09.163 ... 00:35:09.163 fio-3.35 00:35:09.163 Starting 4 threads 00:35:14.432 00:35:14.432 filename0: (groupid=0, jobs=1): err= 0: pid=1892718: Tue Dec 10 12:43:36 2024 00:35:14.432 read: IOPS=2626, BW=20.5MiB/s (21.5MB/s)(103MiB/5001msec) 00:35:14.432 slat (nsec): min=6221, max=44208, avg=11160.68, stdev=4573.36 00:35:14.432 clat (usec): min=512, max=5872, avg=3011.11, stdev=471.40 00:35:14.432 lat (usec): min=524, max=5891, avg=3022.27, stdev=471.47 00:35:14.432 clat percentiles (usec): 00:35:14.432 | 1.00th=[ 1844], 5.00th=[ 2311], 10.00th=[ 2507], 20.00th=[ 2704], 00:35:14.432 | 30.00th=[ 2835], 40.00th=[ 2933], 50.00th=[ 3032], 60.00th=[ 3097], 00:35:14.432 | 70.00th=[ 3130], 80.00th=[ 3261], 90.00th=[ 3490], 95.00th=[ 3785], 00:35:14.432 | 99.00th=[ 4686], 99.50th=[ 4948], 99.90th=[ 5538], 99.95th=[ 5604], 00:35:14.432 | 99.99th=[ 5866] 00:35:14.432 bw ( KiB/s): min=20112, max=22304, per=25.22%, avg=21031.11, stdev=671.15, samples=9 00:35:14.432 iops : min= 2514, max= 2788, avg=2628.89, stdev=83.89, samples=9 00:35:14.432 lat (usec) : 750=0.02%, 1000=0.02% 00:35:14.432 lat (msec) : 2=1.68%, 4=95.06%, 10=3.22% 00:35:14.432 cpu : 
usr=92.50%, sys=5.02%, ctx=498, majf=0, minf=9 00:35:14.432 IO depths : 1=0.6%, 2=8.1%, 4=63.4%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:14.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.432 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.432 issued rwts: total=13136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.432 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:14.432 filename0: (groupid=0, jobs=1): err= 0: pid=1892719: Tue Dec 10 12:43:36 2024 00:35:14.432 read: IOPS=2589, BW=20.2MiB/s (21.2MB/s)(101MiB/5003msec) 00:35:14.432 slat (nsec): min=6268, max=38365, avg=10549.42, stdev=3876.17 00:35:14.432 clat (usec): min=613, max=5689, avg=3055.57, stdev=513.92 00:35:14.432 lat (usec): min=625, max=5700, avg=3066.12, stdev=513.75 00:35:14.432 clat percentiles (usec): 00:35:14.432 | 1.00th=[ 1876], 5.00th=[ 2343], 10.00th=[ 2540], 20.00th=[ 2737], 00:35:14.432 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 3032], 60.00th=[ 3097], 00:35:14.432 | 70.00th=[ 3163], 80.00th=[ 3261], 90.00th=[ 3621], 95.00th=[ 4015], 00:35:14.432 | 99.00th=[ 4948], 99.50th=[ 5211], 99.90th=[ 5604], 99.95th=[ 5604], 00:35:14.432 | 99.99th=[ 5669] 00:35:14.432 bw ( KiB/s): min=19744, max=21488, per=24.86%, avg=20730.40, stdev=593.92, samples=10 00:35:14.432 iops : min= 2468, max= 2686, avg=2591.30, stdev=74.24, samples=10 00:35:14.432 lat (usec) : 750=0.04%, 1000=0.07% 00:35:14.432 lat (msec) : 2=1.41%, 4=93.29%, 10=5.19% 00:35:14.432 cpu : usr=96.02%, sys=3.32%, ctx=223, majf=0, minf=9 00:35:14.432 IO depths : 1=0.2%, 2=9.4%, 4=61.8%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:14.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.432 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.432 issued rwts: total=12954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.432 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:14.432 filename1: 
(groupid=0, jobs=1): err= 0: pid=1892720: Tue Dec 10 12:43:36 2024 00:35:14.432 read: IOPS=2602, BW=20.3MiB/s (21.3MB/s)(102MiB/5002msec) 00:35:14.432 slat (nsec): min=6256, max=38524, avg=10877.52, stdev=4060.10 00:35:14.432 clat (usec): min=589, max=5623, avg=3039.36, stdev=488.15 00:35:14.432 lat (usec): min=600, max=5634, avg=3050.24, stdev=488.02 00:35:14.432 clat percentiles (usec): 00:35:14.432 | 1.00th=[ 1893], 5.00th=[ 2343], 10.00th=[ 2540], 20.00th=[ 2737], 00:35:14.432 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 3032], 60.00th=[ 3097], 00:35:14.432 | 70.00th=[ 3163], 80.00th=[ 3261], 90.00th=[ 3556], 95.00th=[ 3949], 00:35:14.432 | 99.00th=[ 4752], 99.50th=[ 5014], 99.90th=[ 5342], 99.95th=[ 5473], 00:35:14.432 | 99.99th=[ 5604] 00:35:14.432 bw ( KiB/s): min=20080, max=21680, per=25.06%, avg=20899.56, stdev=506.90, samples=9 00:35:14.432 iops : min= 2510, max= 2710, avg=2612.44, stdev=63.36, samples=9 00:35:14.432 lat (usec) : 750=0.05%, 1000=0.05% 00:35:14.432 lat (msec) : 2=1.43%, 4=93.93%, 10=4.54% 00:35:14.432 cpu : usr=96.94%, sys=2.72%, ctx=6, majf=0, minf=9 00:35:14.433 IO depths : 1=0.1%, 2=9.4%, 4=61.7%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:14.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.433 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.433 issued rwts: total=13017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.433 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:14.433 filename1: (groupid=0, jobs=1): err= 0: pid=1892721: Tue Dec 10 12:43:36 2024 00:35:14.433 read: IOPS=2608, BW=20.4MiB/s (21.4MB/s)(102MiB/5002msec) 00:35:14.433 slat (nsec): min=6255, max=33759, avg=10053.60, stdev=3676.91 00:35:14.433 clat (usec): min=541, max=5865, avg=3034.79, stdev=495.34 00:35:14.433 lat (usec): min=554, max=5872, avg=3044.84, stdev=495.29 00:35:14.433 clat percentiles (usec): 00:35:14.433 | 1.00th=[ 1811], 5.00th=[ 2311], 10.00th=[ 2507], 20.00th=[ 2737], 
00:35:14.433 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 3032], 60.00th=[ 3097], 00:35:14.433 | 70.00th=[ 3163], 80.00th=[ 3261], 90.00th=[ 3523], 95.00th=[ 3884], 00:35:14.433 | 99.00th=[ 4883], 99.50th=[ 5080], 99.90th=[ 5604], 99.95th=[ 5604], 00:35:14.433 | 99.99th=[ 5735] 00:35:14.433 bw ( KiB/s): min=19959, max=21760, per=25.02%, avg=20869.50, stdev=547.39, samples=10 00:35:14.433 iops : min= 2494, max= 2720, avg=2608.60, stdev=68.59, samples=10 00:35:14.433 lat (usec) : 750=0.05%, 1000=0.01% 00:35:14.433 lat (msec) : 2=1.82%, 4=93.89%, 10=4.23% 00:35:14.433 cpu : usr=96.44%, sys=3.22%, ctx=9, majf=0, minf=9 00:35:14.433 IO depths : 1=0.5%, 2=10.0%, 4=61.8%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:14.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.433 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.433 issued rwts: total=13048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.433 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:14.433 00:35:14.433 Run status group 0 (all jobs): 00:35:14.433 READ: bw=81.4MiB/s (85.4MB/s), 20.2MiB/s-20.5MiB/s (21.2MB/s-21.5MB/s), io=407MiB (427MB), run=5001-5003msec 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # 
set +x 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.433 00:35:14.433 real 0m24.576s 00:35:14.433 user 4m53.268s 00:35:14.433 sys 0m4.764s 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:14.433 12:43:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:14.433 ************************************ 
00:35:14.433 END TEST fio_dif_rand_params 00:35:14.433 ************************************ 00:35:14.433 12:43:36 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:14.433 12:43:36 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:14.433 12:43:36 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:14.433 12:43:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:14.433 ************************************ 00:35:14.433 START TEST fio_dif_digest 00:35:14.433 ************************************ 00:35:14.433 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:35:14.433 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:14.433 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:14.433 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:14.692 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:14.692 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:14.692 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:14.692 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:14.693 
12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:14.693 bdev_null0 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:14.693 [2024-12-10 12:43:36.629264] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 
00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:14.693 { 00:35:14.693 "params": { 00:35:14.693 "name": "Nvme$subsystem", 00:35:14.693 "trtype": "$TEST_TRANSPORT", 00:35:14.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:14.693 "adrfam": "ipv4", 00:35:14.693 "trsvcid": "$NVMF_PORT", 00:35:14.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:14.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:14.693 "hdgst": ${hdgst:-false}, 00:35:14.693 "ddgst": ${ddgst:-false} 00:35:14.693 }, 00:35:14.693 "method": "bdev_nvme_attach_controller" 00:35:14.693 } 00:35:14.693 EOF 00:35:14.693 )") 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:14.693 12:43:36 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:14.693 "params": { 00:35:14.693 "name": "Nvme0", 00:35:14.693 "trtype": "tcp", 00:35:14.693 "traddr": "10.0.0.2", 00:35:14.693 "adrfam": "ipv4", 00:35:14.693 "trsvcid": "4420", 00:35:14.693 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:14.693 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:14.693 "hdgst": true, 00:35:14.693 "ddgst": true 00:35:14.693 }, 00:35:14.693 "method": "bdev_nvme_attach_controller" 00:35:14.693 }' 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:35:14.693 12:43:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:14.952 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:14.952 ... 
00:35:14.952 fio-3.35 00:35:14.952 Starting 3 threads 00:35:27.158 00:35:27.158 filename0: (groupid=0, jobs=1): err= 0: pid=1893931: Tue Dec 10 12:43:47 2024 00:35:27.158 read: IOPS=285, BW=35.6MiB/s (37.4MB/s)(358MiB/10047msec) 00:35:27.158 slat (nsec): min=6552, max=51429, avg=14344.59, stdev=6161.71 00:35:27.158 clat (usec): min=8046, max=55013, avg=10488.24, stdev=1924.61 00:35:27.158 lat (usec): min=8059, max=55040, avg=10502.58, stdev=1924.37 00:35:27.158 clat percentiles (usec): 00:35:27.158 | 1.00th=[ 8717], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9765], 00:35:27.158 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:35:27.158 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11338], 95.00th=[11731], 00:35:27.158 | 99.00th=[12387], 99.50th=[12649], 99.90th=[54789], 99.95th=[54789], 00:35:27.158 | 99.99th=[54789] 00:35:27.158 bw ( KiB/s): min=33280, max=38144, per=35.95%, avg=36646.40, stdev=1105.86, samples=20 00:35:27.158 iops : min= 260, max= 298, avg=286.30, stdev= 8.64, samples=20 00:35:27.158 lat (msec) : 10=29.42%, 20=70.40%, 50=0.03%, 100=0.14% 00:35:27.158 cpu : usr=94.79%, sys=4.89%, ctx=14, majf=0, minf=91 00:35:27.158 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:27.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.158 issued rwts: total=2865,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.158 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:27.158 filename0: (groupid=0, jobs=1): err= 0: pid=1893932: Tue Dec 10 12:43:47 2024 00:35:27.158 read: IOPS=252, BW=31.5MiB/s (33.1MB/s)(317MiB/10045msec) 00:35:27.158 slat (nsec): min=6570, max=63916, avg=14196.87, stdev=4144.33 00:35:27.158 clat (usec): min=6865, max=47689, avg=11864.96, stdev=1301.00 00:35:27.158 lat (usec): min=6878, max=47708, avg=11879.16, stdev=1300.88 00:35:27.158 clat percentiles (usec): 
00:35:27.158 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[10814], 20.00th=[11207], 00:35:27.158 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11863], 60.00th=[11994], 00:35:27.158 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12911], 95.00th=[13173], 00:35:27.158 | 99.00th=[13960], 99.50th=[14222], 99.90th=[14615], 99.95th=[45876], 00:35:27.158 | 99.99th=[47449] 00:35:27.158 bw ( KiB/s): min=31488, max=33280, per=31.78%, avg=32396.80, stdev=402.42, samples=20 00:35:27.158 iops : min= 246, max= 260, avg=253.10, stdev= 3.14, samples=20 00:35:27.158 lat (msec) : 10=1.42%, 20=98.50%, 50=0.08% 00:35:27.158 cpu : usr=95.11%, sys=4.58%, ctx=15, majf=0, minf=81 00:35:27.158 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:27.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.158 issued rwts: total=2533,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.158 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:27.158 filename0: (groupid=0, jobs=1): err= 0: pid=1893933: Tue Dec 10 12:43:47 2024 00:35:27.158 read: IOPS=259, BW=32.4MiB/s (34.0MB/s)(325MiB/10045msec) 00:35:27.158 slat (nsec): min=6565, max=51963, avg=14239.25, stdev=5625.17 00:35:27.158 clat (usec): min=6865, max=50831, avg=11544.14, stdev=1325.87 00:35:27.158 lat (usec): min=6873, max=50842, avg=11558.38, stdev=1326.00 00:35:27.158 clat percentiles (usec): 00:35:27.158 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 00:35:27.158 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:35:27.158 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12518], 95.00th=[12780], 00:35:27.158 | 99.00th=[13435], 99.50th=[13698], 99.90th=[14484], 99.95th=[47449], 00:35:27.158 | 99.99th=[50594] 00:35:27.158 bw ( KiB/s): min=32512, max=34304, per=32.66%, avg=33292.80, stdev=553.91, samples=20 00:35:27.158 iops : min= 254, max= 268, avg=260.10, 
stdev= 4.33, samples=20 00:35:27.158 lat (msec) : 10=2.96%, 20=96.97%, 50=0.04%, 100=0.04% 00:35:27.158 cpu : usr=94.96%, sys=4.72%, ctx=18, majf=0, minf=33 00:35:27.158 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:27.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:27.158 issued rwts: total=2603,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:27.159 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:27.159 00:35:27.159 Run status group 0 (all jobs): 00:35:27.159 READ: bw=99.5MiB/s (104MB/s), 31.5MiB/s-35.6MiB/s (33.1MB/s-37.4MB/s), io=1000MiB (1049MB), run=10045-10047msec 00:35:27.159 12:43:47 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:27.159 12:43:47 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:27.159 12:43:47 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:27.159 12:43:47 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:27.159 12:43:47 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:27.159 12:43:47 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:27.159 12:43:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.159 12:43:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:27.159 12:43:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.159 12:43:47 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:27.159 12:43:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.159 12:43:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:27.159 12:43:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.159 
00:35:27.159 real 0m11.123s 00:35:27.159 user 0m35.512s 00:35:27.159 sys 0m1.748s 00:35:27.159 12:43:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:27.159 12:43:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:27.159 ************************************ 00:35:27.159 END TEST fio_dif_digest 00:35:27.159 ************************************ 00:35:27.159 12:43:47 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:27.159 12:43:47 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:27.159 12:43:47 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:27.159 12:43:47 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:35:27.159 12:43:47 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:27.159 12:43:47 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:35:27.159 12:43:47 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:27.159 12:43:47 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:27.159 rmmod nvme_tcp 00:35:27.159 rmmod nvme_fabrics 00:35:27.159 rmmod nvme_keyring 00:35:27.159 12:43:47 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:27.159 12:43:47 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:35:27.159 12:43:47 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:35:27.159 12:43:47 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1885415 ']' 00:35:27.159 12:43:47 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1885415 00:35:27.159 12:43:47 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1885415 ']' 00:35:27.159 12:43:47 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1885415 00:35:27.159 12:43:47 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:35:27.159 12:43:47 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:27.159 12:43:47 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1885415 00:35:27.159 12:43:47 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:35:27.159 12:43:47 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:27.159 12:43:47 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1885415' 00:35:27.159 killing process with pid 1885415 00:35:27.159 12:43:47 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1885415 00:35:27.159 12:43:47 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1885415 00:35:27.159 12:43:48 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:27.159 12:43:48 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:35:28.538 Waiting for block devices as requested 00:35:28.797 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:28.797 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:28.797 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:29.060 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:29.060 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:29.060 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:29.320 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:29.320 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:29.320 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:29.320 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:29.580 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:29.580 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:29.580 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:29.839 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:29.839 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:29.839 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:30.098 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:30.098 12:43:52 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:30.098 12:43:52 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:30.098 12:43:52 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:35:30.098 12:43:52 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:35:30.098 12:43:52 nvmf_dif -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:30.098 12:43:52 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:35:30.098 12:43:52 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:30.098 12:43:52 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:30.098 12:43:52 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:30.098 12:43:52 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:30.098 12:43:52 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.635 12:43:54 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:32.635 00:35:32.635 real 1m14.059s 00:35:32.635 user 7m11.112s 00:35:32.635 sys 0m19.888s 00:35:32.635 12:43:54 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:32.635 12:43:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:32.635 ************************************ 00:35:32.635 END TEST nvmf_dif 00:35:32.635 ************************************ 00:35:32.635 12:43:54 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:32.635 12:43:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:32.635 12:43:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:32.635 12:43:54 -- common/autotest_common.sh@10 -- # set +x 00:35:32.635 ************************************ 00:35:32.635 START TEST nvmf_abort_qd_sizes 00:35:32.635 ************************************ 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:32.635 * Looking for test storage... 
00:35:32.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:32.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.635 --rc genhtml_branch_coverage=1 00:35:32.635 --rc genhtml_function_coverage=1 00:35:32.635 --rc genhtml_legend=1 00:35:32.635 --rc geninfo_all_blocks=1 00:35:32.635 --rc geninfo_unexecuted_blocks=1 00:35:32.635 00:35:32.635 ' 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:32.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.635 --rc genhtml_branch_coverage=1 00:35:32.635 --rc genhtml_function_coverage=1 00:35:32.635 --rc genhtml_legend=1 00:35:32.635 --rc 
geninfo_all_blocks=1 00:35:32.635 --rc geninfo_unexecuted_blocks=1 00:35:32.635 00:35:32.635 ' 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:32.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.635 --rc genhtml_branch_coverage=1 00:35:32.635 --rc genhtml_function_coverage=1 00:35:32.635 --rc genhtml_legend=1 00:35:32.635 --rc geninfo_all_blocks=1 00:35:32.635 --rc geninfo_unexecuted_blocks=1 00:35:32.635 00:35:32.635 ' 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:32.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.635 --rc genhtml_branch_coverage=1 00:35:32.635 --rc genhtml_function_coverage=1 00:35:32.635 --rc genhtml_legend=1 00:35:32.635 --rc geninfo_all_blocks=1 00:35:32.635 --rc geninfo_unexecuted_blocks=1 00:35:32.635 00:35:32.635 ' 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:32.635 12:43:54 
nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:32.635 12:43:54 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:32.636 12:43:54 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:32.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:35:32.636 12:43:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:37.918 12:43:59 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:37.918 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:37.918 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:37.918 12:43:59 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:37.918 Found net devices under 0000:86:00.0: cvl_0_0 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:37.918 Found net devices under 0000:86:00.1: cvl_0_1 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:37.918 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:38.177 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:38.177 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:38.177 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:38.177 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:38.177 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:38.177 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:38.177 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:38.177 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:38.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:38.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:35:38.177 00:35:38.177 --- 10.0.0.2 ping statistics --- 00:35:38.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:38.177 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:35:38.177 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:38.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:38.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:35:38.177 00:35:38.177 --- 10.0.0.1 ping statistics --- 00:35:38.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:38.177 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:35:38.177 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:38.177 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:35:38.177 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:38.178 12:44:00 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:35:41.468 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:41.468 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:41.468 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:41.468 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:41.468 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:41.468 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:41.468 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:41.468 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:41.468 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:41.468 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:41.468 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:41.468 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:41.468 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:41.468 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:41.468 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:41.468 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:42.036 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:42.036 12:44:04 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:42.036 12:44:04 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:42.036 12:44:04 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:42.036 12:44:04 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:42.036 12:44:04 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:42.036 12:44:04 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:42.036 12:44:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:42.036 12:44:04 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:42.036 12:44:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:42.036 12:44:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:42.036 12:44:04 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1901731 00:35:42.036 12:44:04 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1901731 00:35:42.036 12:44:04 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:42.036 12:44:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1901731 ']' 00:35:42.036 12:44:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:42.036 12:44:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:42.036 12:44:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:42.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:42.036 12:44:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:42.036 12:44:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:42.036 [2024-12-10 12:44:04.182987] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:35:42.036 [2024-12-10 12:44:04.183040] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:42.295 [2024-12-10 12:44:04.263959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:42.295 [2024-12-10 12:44:04.307994] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:42.295 [2024-12-10 12:44:04.308031] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:42.295 [2024-12-10 12:44:04.308039] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:42.295 [2024-12-10 12:44:04.308046] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:42.295 [2024-12-10 12:44:04.308052] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:42.295 [2024-12-10 12:44:04.309492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:42.295 [2024-12-10 12:44:04.309602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:42.295 [2024-12-10 12:44:04.309689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:42.295 [2024-12-10 12:44:04.309690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:42.295 12:44:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:42.295 12:44:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:35:42.295 12:44:04 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:42.295 12:44:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:42.295 12:44:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:42.295 12:44:04 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:42.295 12:44:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:42.295 12:44:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:42.295 12:44:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:42.295 12:44:04 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:42.295 12:44:04 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:42.295 12:44:04 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:35:42.295 12:44:04 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:42.295 12:44:04 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:42.295 12:44:04 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:35:42.295 12:44:04 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:42.295 12:44:04 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:42.553 12:44:04 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:42.553 12:44:04 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:42.553 12:44:04 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:35:42.553 12:44:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:42.553 12:44:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:35:42.553 12:44:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:42.553 12:44:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:42.553 12:44:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:42.553 12:44:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:42.553 ************************************ 00:35:42.553 START TEST spdk_target_abort 00:35:42.553 ************************************ 00:35:42.553 12:44:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:35:42.553 12:44:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:42.553 12:44:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:35:42.553 12:44:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.553 12:44:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.845 spdk_targetn1 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.845 [2024-12-10 12:44:07.331983] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:45.845 [2024-12-10 12:44:07.380297] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:45.845 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.846 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:45.846 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.846 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:45.846 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.846 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:45.846 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:45.846 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:45.846 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:45.846 12:44:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:49.127 Initializing NVMe Controllers 00:35:49.127 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:49.127 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:49.127 Initialization complete. Launching workers. 
00:35:49.127 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15152, failed: 0 00:35:49.127 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1333, failed to submit 13819 00:35:49.127 success 694, unsuccessful 639, failed 0 00:35:49.127 12:44:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:49.127 12:44:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:52.409 Initializing NVMe Controllers 00:35:52.409 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:52.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:52.409 Initialization complete. Launching workers. 00:35:52.409 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8610, failed: 0 00:35:52.409 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1241, failed to submit 7369 00:35:52.409 success 311, unsuccessful 930, failed 0 00:35:52.409 12:44:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:52.409 12:44:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:54.937 Initializing NVMe Controllers 00:35:54.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:54.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:54.937 Initialization complete. Launching workers. 
00:35:54.937 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37999, failed: 0 00:35:54.937 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2888, failed to submit 35111 00:35:54.937 success 563, unsuccessful 2325, failed 0 00:35:54.937 12:44:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:54.937 12:44:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.937 12:44:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:54.937 12:44:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.937 12:44:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:54.937 12:44:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.937 12:44:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:56.310 12:44:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.310 12:44:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1901731 00:35:56.310 12:44:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1901731 ']' 00:35:56.310 12:44:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1901731 00:35:56.310 12:44:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:35:56.310 12:44:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:56.310 12:44:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1901731 00:35:56.310 12:44:18 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:56.310 12:44:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:56.310 12:44:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1901731' 00:35:56.310 killing process with pid 1901731 00:35:56.310 12:44:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1901731 00:35:56.310 12:44:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1901731 00:35:56.569 00:35:56.569 real 0m14.100s 00:35:56.569 user 0m53.692s 00:35:56.569 sys 0m2.653s 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:56.569 ************************************ 00:35:56.569 END TEST spdk_target_abort 00:35:56.569 ************************************ 00:35:56.569 12:44:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:56.569 12:44:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:56.569 12:44:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:56.569 12:44:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:56.569 ************************************ 00:35:56.569 START TEST kernel_target_abort 00:35:56.569 ************************************ 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:35:56.569 12:44:18 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:56.569 12:44:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:35:59.924 Waiting for block devices as requested 00:35:59.924 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:59.924 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:59.924 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:59.924 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:59.924 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:59.924 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:59.924 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:59.924 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:59.924 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:00.208 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:00.208 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:00.208 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:00.208 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:00.475 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:00.475 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:00.475 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:00.734 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:00.734 12:44:22 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/spdk-gpt.py nvme0n1 00:36:00.734 No valid GPT data, bailing 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:00.734 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:36:00.994 00:36:00.994 Discovery Log Number of Records 2, Generation counter 2 00:36:00.994 =====Discovery Log Entry 0====== 00:36:00.994 trtype: tcp 00:36:00.994 adrfam: ipv4 00:36:00.994 subtype: current discovery subsystem 00:36:00.994 treq: not specified, sq flow control disable supported 00:36:00.994 portid: 1 00:36:00.994 trsvcid: 4420 00:36:00.994 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:00.994 traddr: 10.0.0.1 00:36:00.994 eflags: none 00:36:00.994 sectype: none 00:36:00.994 =====Discovery Log Entry 1====== 00:36:00.994 trtype: tcp 00:36:00.994 adrfam: ipv4 00:36:00.994 subtype: nvme subsystem 00:36:00.994 treq: not specified, sq flow control disable supported 00:36:00.994 portid: 1 00:36:00.994 trsvcid: 4420 00:36:00.994 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:00.994 traddr: 10.0.0.1 00:36:00.994 eflags: none 00:36:00.994 sectype: none 00:36:00.994 12:44:22 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:00.994 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:00.994 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:00.994 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:00.994 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:00.994 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:00.994 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:00.994 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:00.994 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:00.994 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:00.994 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:00.994 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:00.994 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:00.994 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:00.994 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:00.994 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:36:00.994 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:00.994 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:00.994 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:00.994 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:00.994 12:44:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:04.282 Initializing NVMe Controllers 00:36:04.282 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:04.282 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:04.282 Initialization complete. Launching workers. 
00:36:04.282 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 93184, failed: 0 00:36:04.282 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 93184, failed to submit 0 00:36:04.282 success 0, unsuccessful 93184, failed 0 00:36:04.282 12:44:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:04.282 12:44:26 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:07.568 Initializing NVMe Controllers 00:36:07.568 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:07.568 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:07.568 Initialization complete. Launching workers. 00:36:07.568 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146088, failed: 0 00:36:07.568 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36650, failed to submit 109438 00:36:07.568 success 0, unsuccessful 36650, failed 0 00:36:07.568 12:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:07.568 12:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:10.855 Initializing NVMe Controllers 00:36:10.855 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:10.855 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:10.855 Initialization complete. Launching workers. 
00:36:10.855 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 136380, failed: 0 00:36:10.856 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34154, failed to submit 102226 00:36:10.856 success 0, unsuccessful 34154, failed 0 00:36:10.856 12:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:10.856 12:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:10.856 12:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:36:10.856 12:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:10.856 12:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:10.856 12:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:10.856 12:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:10.856 12:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:10.856 12:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:10.856 12:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:36:13.391 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:13.391 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:13.391 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:13.391 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:13.391 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:13.391 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:13.391 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:13.391 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:13.391 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:36:13.391 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:36:13.391 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:36:13.391 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:36:13.391 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:36:13.391 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:36:13.391 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:36:13.391 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:36:13.959 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:36:14.218 00:36:14.218 real 0m17.504s 00:36:14.218 user 0m9.032s 00:36:14.218 sys 0m5.126s 00:36:14.218 12:44:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:14.218 12:44:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:14.218 ************************************ 00:36:14.218 END TEST kernel_target_abort 00:36:14.218 ************************************ 00:36:14.218 12:44:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:14.218 12:44:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:14.218 12:44:36 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:14.218 12:44:36 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:36:14.218 12:44:36 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:14.218 12:44:36 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:36:14.218 12:44:36 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:14.218 12:44:36 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:14.218 rmmod nvme_tcp 00:36:14.218 rmmod nvme_fabrics 00:36:14.218 rmmod nvme_keyring 00:36:14.218 12:44:36 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:36:14.218 12:44:36 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:36:14.218 12:44:36 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:36:14.218 12:44:36 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1901731 ']' 00:36:14.218 12:44:36 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1901731 00:36:14.218 12:44:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1901731 ']' 00:36:14.218 12:44:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1901731 00:36:14.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (1901731) - No such process 00:36:14.218 12:44:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1901731 is not found' 00:36:14.218 Process with pid 1901731 is not found 00:36:14.218 12:44:36 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:14.218 12:44:36 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:36:16.753 Waiting for block devices as requested 00:36:17.013 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:17.013 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:17.272 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:17.272 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:17.272 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:17.272 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:17.531 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:17.531 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:17.531 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:17.790 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:17.790 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:17.790 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:18.049 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:18.049 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:18.049 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:18.049 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:18.307 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:18.307 12:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:18.307 12:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:18.307 12:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:36:18.307 12:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:36:18.307 12:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:18.307 12:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:36:18.307 12:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:18.307 12:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:18.307 12:44:40 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:18.307 12:44:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:18.307 12:44:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:20.843 12:44:42 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:20.843 00:36:20.843 real 0m48.181s 00:36:20.843 user 1m7.074s 00:36:20.843 sys 0m16.431s 00:36:20.843 12:44:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:20.843 12:44:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:20.843 ************************************ 00:36:20.843 END TEST nvmf_abort_qd_sizes 00:36:20.843 ************************************ 00:36:20.843 12:44:42 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/file.sh 00:36:20.843 12:44:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:20.843 12:44:42 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:36:20.843 12:44:42 -- common/autotest_common.sh@10 -- # set +x 00:36:20.843 ************************************ 00:36:20.843 START TEST keyring_file 00:36:20.843 ************************************ 00:36:20.843 12:44:42 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/file.sh 00:36:20.843 * Looking for test storage... 00:36:20.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring 00:36:20.843 12:44:42 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:20.843 12:44:42 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:36:20.843 12:44:42 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:20.843 12:44:42 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@345 -- # : 1 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:20.843 12:44:42 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@353 -- # local d=1 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@355 -- # echo 1 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@353 -- # local d=2 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@355 -- # echo 2 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@368 -- # return 0 00:36:20.843 12:44:42 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:20.843 12:44:42 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:20.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.843 --rc genhtml_branch_coverage=1 00:36:20.843 --rc genhtml_function_coverage=1 00:36:20.843 --rc genhtml_legend=1 00:36:20.843 --rc geninfo_all_blocks=1 00:36:20.843 --rc geninfo_unexecuted_blocks=1 00:36:20.843 00:36:20.843 ' 00:36:20.843 12:44:42 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:20.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.843 --rc genhtml_branch_coverage=1 00:36:20.843 --rc genhtml_function_coverage=1 00:36:20.843 --rc genhtml_legend=1 00:36:20.843 --rc geninfo_all_blocks=1 00:36:20.843 --rc 
geninfo_unexecuted_blocks=1 00:36:20.843 00:36:20.843 ' 00:36:20.843 12:44:42 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:20.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.843 --rc genhtml_branch_coverage=1 00:36:20.843 --rc genhtml_function_coverage=1 00:36:20.843 --rc genhtml_legend=1 00:36:20.843 --rc geninfo_all_blocks=1 00:36:20.843 --rc geninfo_unexecuted_blocks=1 00:36:20.843 00:36:20.843 ' 00:36:20.843 12:44:42 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:20.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:20.843 --rc genhtml_branch_coverage=1 00:36:20.843 --rc genhtml_function_coverage=1 00:36:20.843 --rc genhtml_legend=1 00:36:20.843 --rc geninfo_all_blocks=1 00:36:20.843 --rc geninfo_unexecuted_blocks=1 00:36:20.843 00:36:20.843 ' 00:36:20.843 12:44:42 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/common.sh 00:36:20.843 12:44:42 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:20.843 12:44:42 
keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:20.843 12:44:42 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:20.843 12:44:42 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.843 12:44:42 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.843 12:44:42 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.843 12:44:42 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:20.843 12:44:42 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@51 -- # : 0 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:36:20.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:20.843 12:44:42 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:20.843 12:44:42 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:20.844 12:44:42 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:20.844 12:44:42 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:20.844 12:44:42 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:20.844 12:44:42 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:20.844 12:44:42 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:20.844 12:44:42 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:20.844 12:44:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:20.844 12:44:42 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:20.844 12:44:42 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:20.844 12:44:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:20.844 12:44:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:20.844 12:44:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.PkWi71dDGE 00:36:20.844 12:44:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:20.844 12:44:42 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:20.844 12:44:42 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:20.844 12:44:42 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:20.844 12:44:42 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:36:20.844 12:44:42 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:20.844 12:44:42 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:20.844 12:44:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.PkWi71dDGE 00:36:20.844 12:44:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.PkWi71dDGE 00:36:20.844 12:44:42 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.PkWi71dDGE 00:36:20.844 12:44:42 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:20.844 12:44:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:20.844 12:44:42 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:20.844 12:44:42 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:20.844 12:44:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:20.844 12:44:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:20.844 12:44:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AwMHgxbecw 00:36:20.844 12:44:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:20.844 12:44:42 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:20.844 12:44:42 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:20.844 12:44:42 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:20.844 12:44:42 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:20.844 12:44:42 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:20.844 12:44:42 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:20.844 12:44:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AwMHgxbecw 00:36:20.844 12:44:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AwMHgxbecw 00:36:20.844 12:44:42 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.AwMHgxbecw 
00:36:20.844 12:44:42 keyring_file -- keyring/file.sh@30 -- # tgtpid=1910508 00:36:20.844 12:44:42 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:36:20.844 12:44:42 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1910508 00:36:20.844 12:44:42 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1910508 ']' 00:36:20.844 12:44:42 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:20.844 12:44:42 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:20.844 12:44:42 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:20.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:20.844 12:44:42 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:20.844 12:44:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:20.844 [2024-12-10 12:44:42.865215] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 
00:36:20.844 [2024-12-10 12:44:42.865265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1910508 ] 00:36:20.844 [2024-12-10 12:44:42.942312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:20.844 [2024-12-10 12:44:42.983541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:21.103 12:44:43 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:21.103 12:44:43 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:21.103 12:44:43 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:21.103 12:44:43 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.103 12:44:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:21.103 [2024-12-10 12:44:43.198916] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:21.103 null0 00:36:21.103 [2024-12-10 12:44:43.230976] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:21.103 [2024-12-10 12:44:43.231332] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:21.103 12:44:43 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.103 12:44:43 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:21.103 12:44:43 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:21.103 12:44:43 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:21.103 12:44:43 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:21.103 12:44:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:36:21.103 12:44:43 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:21.103 12:44:43 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:21.103 12:44:43 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:21.103 12:44:43 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.103 12:44:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:21.103 [2024-12-10 12:44:43.263043] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:21.103 request: 00:36:21.103 { 00:36:21.103 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:21.103 "secure_channel": false, 00:36:21.103 "listen_address": { 00:36:21.103 "trtype": "tcp", 00:36:21.103 "traddr": "127.0.0.1", 00:36:21.103 "trsvcid": "4420" 00:36:21.103 }, 00:36:21.103 "method": "nvmf_subsystem_add_listener", 00:36:21.103 "req_id": 1 00:36:21.103 } 00:36:21.103 Got JSON-RPC error response 00:36:21.362 response: 00:36:21.362 { 00:36:21.362 "code": -32602, 00:36:21.362 "message": "Invalid parameters" 00:36:21.362 } 00:36:21.362 12:44:43 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:21.362 12:44:43 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:21.362 12:44:43 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:21.362 12:44:43 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:21.362 12:44:43 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:21.362 12:44:43 keyring_file -- keyring/file.sh@47 -- # bperfpid=1910515 00:36:21.362 12:44:43 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1910515 /var/tmp/bperf.sock 00:36:21.362 12:44:43 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:21.362 
12:44:43 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1910515 ']' 00:36:21.362 12:44:43 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:21.362 12:44:43 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:21.362 12:44:43 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:21.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:21.362 12:44:43 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:21.362 12:44:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:21.362 [2024-12-10 12:44:43.314376] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:36:21.362 [2024-12-10 12:44:43.314418] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1910515 ] 00:36:21.362 [2024-12-10 12:44:43.388479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:21.362 [2024-12-10 12:44:43.430374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:21.362 12:44:43 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:21.362 12:44:43 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:21.363 12:44:43 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PkWi71dDGE 00:36:21.363 12:44:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PkWi71dDGE 00:36:21.620 12:44:43 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.AwMHgxbecw 00:36:21.620 12:44:43 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.AwMHgxbecw 00:36:21.879 12:44:43 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:36:21.879 12:44:43 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:21.879 12:44:43 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:21.879 12:44:43 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:21.879 12:44:43 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.137 12:44:44 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.PkWi71dDGE == \/\t\m\p\/\t\m\p\.\P\k\W\i\7\1\d\D\G\E ]] 00:36:22.137 12:44:44 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:36:22.137 12:44:44 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:36:22.137 12:44:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:22.137 12:44:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:22.137 12:44:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.395 12:44:44 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.AwMHgxbecw == \/\t\m\p\/\t\m\p\.\A\w\M\H\g\x\b\e\c\w ]] 00:36:22.395 12:44:44 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:36:22.395 12:44:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:22.395 12:44:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:22.395 12:44:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:22.395 12:44:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:22.395 12:44:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:36:22.395 12:44:44 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:22.395 12:44:44 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:36:22.395 12:44:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:22.395 12:44:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:22.395 12:44:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:22.395 12:44:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:22.396 12:44:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.654 12:44:44 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:36:22.654 12:44:44 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:22.654 12:44:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:22.913 [2024-12-10 12:44:44.918497] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:22.913 nvme0n1 00:36:22.913 12:44:45 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:36:22.913 12:44:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:22.913 12:44:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:22.913 12:44:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:22.913 12:44:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:22.913 12:44:45 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:23.170 12:44:45 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:36:23.170 12:44:45 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:36:23.170 12:44:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:23.170 12:44:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:23.170 12:44:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:23.170 12:44:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:23.170 12:44:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:23.428 12:44:45 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:36:23.428 12:44:45 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:23.428 Running I/O for 1 seconds... 
00:36:24.363 18815.00 IOPS, 73.50 MiB/s 00:36:24.363 Latency(us) 00:36:24.363 [2024-12-10T11:44:46.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:24.363 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:24.363 nvme0n1 : 1.00 18864.62 73.69 0.00 0.00 6773.24 4103.12 13620.09 00:36:24.363 [2024-12-10T11:44:46.531Z] =================================================================================================================== 00:36:24.363 [2024-12-10T11:44:46.531Z] Total : 18864.62 73.69 0.00 0.00 6773.24 4103.12 13620.09 00:36:24.363 { 00:36:24.363 "results": [ 00:36:24.363 { 00:36:24.363 "job": "nvme0n1", 00:36:24.363 "core_mask": "0x2", 00:36:24.363 "workload": "randrw", 00:36:24.363 "percentage": 50, 00:36:24.363 "status": "finished", 00:36:24.363 "queue_depth": 128, 00:36:24.363 "io_size": 4096, 00:36:24.363 "runtime": 1.004155, 00:36:24.363 "iops": 18864.617514228383, 00:36:24.363 "mibps": 73.68991216495462, 00:36:24.363 "io_failed": 0, 00:36:24.363 "io_timeout": 0, 00:36:24.363 "avg_latency_us": 6773.235377712084, 00:36:24.363 "min_latency_us": 4103.12347826087, 00:36:24.363 "max_latency_us": 13620.090434782609 00:36:24.363 } 00:36:24.363 ], 00:36:24.363 "core_count": 1 00:36:24.363 } 00:36:24.363 12:44:46 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:24.363 12:44:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:24.622 12:44:46 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:36:24.622 12:44:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:24.622 12:44:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:24.622 12:44:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:24.622 12:44:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name 
== "key0")' 00:36:24.622 12:44:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:24.880 12:44:46 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:24.880 12:44:46 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:36:24.880 12:44:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:24.880 12:44:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:24.880 12:44:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:24.880 12:44:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:24.880 12:44:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:25.139 12:44:47 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:36:25.139 12:44:47 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:25.139 12:44:47 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:25.139 12:44:47 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:25.139 12:44:47 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:25.139 12:44:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:25.139 12:44:47 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:25.139 12:44:47 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:25.139 12:44:47 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 
4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:25.139 12:44:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:25.139 [2024-12-10 12:44:47.302829] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:25.139 [2024-12-10 12:44:47.303530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1546e90 (107): Transport endpoint is not connected 00:36:25.139 [2024-12-10 12:44:47.304524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1546e90 (9): Bad file descriptor 00:36:25.398 [2024-12-10 12:44:47.305526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:25.398 [2024-12-10 12:44:47.305537] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:25.398 [2024-12-10 12:44:47.305545] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:25.398 [2024-12-10 12:44:47.305559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:36:25.398 request: 00:36:25.398 { 00:36:25.398 "name": "nvme0", 00:36:25.398 "trtype": "tcp", 00:36:25.398 "traddr": "127.0.0.1", 00:36:25.398 "adrfam": "ipv4", 00:36:25.398 "trsvcid": "4420", 00:36:25.398 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:25.398 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:25.398 "prchk_reftag": false, 00:36:25.398 "prchk_guard": false, 00:36:25.398 "hdgst": false, 00:36:25.398 "ddgst": false, 00:36:25.398 "psk": "key1", 00:36:25.398 "allow_unrecognized_csi": false, 00:36:25.398 "method": "bdev_nvme_attach_controller", 00:36:25.398 "req_id": 1 00:36:25.398 } 00:36:25.398 Got JSON-RPC error response 00:36:25.398 response: 00:36:25.398 { 00:36:25.398 "code": -5, 00:36:25.398 "message": "Input/output error" 00:36:25.398 } 00:36:25.398 12:44:47 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:25.398 12:44:47 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:25.398 12:44:47 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:25.398 12:44:47 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:25.398 12:44:47 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:36:25.398 12:44:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:25.398 12:44:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:25.398 12:44:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:25.398 12:44:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:25.398 12:44:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:25.398 12:44:47 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:25.398 12:44:47 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:36:25.398 12:44:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:25.398 12:44:47 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:36:25.398 12:44:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:25.398 12:44:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:25.398 12:44:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:25.657 12:44:47 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:36:25.657 12:44:47 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:36:25.657 12:44:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:25.916 12:44:47 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:36:25.916 12:44:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:26.175 12:44:48 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:36:26.175 12:44:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:26.175 12:44:48 keyring_file -- keyring/file.sh@78 -- # jq length 00:36:26.175 12:44:48 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:36:26.175 12:44:48 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.PkWi71dDGE 00:36:26.175 12:44:48 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.PkWi71dDGE 00:36:26.175 12:44:48 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:26.175 12:44:48 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.PkWi71dDGE 00:36:26.175 12:44:48 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:26.175 12:44:48 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:26.175 12:44:48 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:26.175 12:44:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:26.175 12:44:48 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PkWi71dDGE 00:36:26.175 12:44:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PkWi71dDGE 00:36:26.434 [2024-12-10 12:44:48.469607] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.PkWi71dDGE': 0100660 00:36:26.434 [2024-12-10 12:44:48.469632] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:26.434 request: 00:36:26.434 { 00:36:26.434 "name": "key0", 00:36:26.434 "path": "/tmp/tmp.PkWi71dDGE", 00:36:26.434 "method": "keyring_file_add_key", 00:36:26.434 "req_id": 1 00:36:26.434 } 00:36:26.434 Got JSON-RPC error response 00:36:26.434 response: 00:36:26.434 { 00:36:26.434 "code": -1, 00:36:26.434 "message": "Operation not permitted" 00:36:26.434 } 00:36:26.434 12:44:48 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:26.434 12:44:48 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:26.434 12:44:48 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:26.434 12:44:48 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:26.434 12:44:48 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.PkWi71dDGE 00:36:26.434 12:44:48 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.PkWi71dDGE 00:36:26.434 12:44:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.PkWi71dDGE 00:36:26.693 
12:44:48 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.PkWi71dDGE 00:36:26.693 12:44:48 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:36:26.693 12:44:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:26.693 12:44:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:26.693 12:44:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:26.693 12:44:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:26.693 12:44:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:26.952 12:44:48 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:36:26.952 12:44:48 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:26.952 12:44:48 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:26.952 12:44:48 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:26.952 12:44:48 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:26.952 12:44:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:26.952 12:44:48 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:26.952 12:44:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:26.952 12:44:48 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:26.952 12:44:48 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:26.952 [2024-12-10 12:44:49.055162] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.PkWi71dDGE': No such file or directory 00:36:26.952 [2024-12-10 12:44:49.055180] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:26.952 [2024-12-10 12:44:49.055196] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:26.952 [2024-12-10 12:44:49.055203] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:36:26.952 [2024-12-10 12:44:49.055209] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:26.952 [2024-12-10 12:44:49.055216] bdev_nvme.c:6802:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:26.952 request: 00:36:26.952 { 00:36:26.952 "name": "nvme0", 00:36:26.952 "trtype": "tcp", 00:36:26.952 "traddr": "127.0.0.1", 00:36:26.952 "adrfam": "ipv4", 00:36:26.952 "trsvcid": "4420", 00:36:26.952 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:26.952 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:26.952 "prchk_reftag": false, 00:36:26.952 "prchk_guard": false, 00:36:26.952 "hdgst": false, 00:36:26.952 "ddgst": false, 00:36:26.952 "psk": "key0", 00:36:26.952 "allow_unrecognized_csi": false, 00:36:26.952 "method": "bdev_nvme_attach_controller", 00:36:26.952 "req_id": 1 00:36:26.952 } 00:36:26.952 Got JSON-RPC error response 00:36:26.952 response: 00:36:26.952 { 00:36:26.952 "code": -19, 00:36:26.952 "message": "No such device" 00:36:26.952 } 00:36:26.952 12:44:49 keyring_file -- common/autotest_common.sh@655 
-- # es=1 00:36:26.952 12:44:49 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:26.952 12:44:49 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:26.952 12:44:49 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:26.952 12:44:49 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:36:26.952 12:44:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:27.211 12:44:49 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:27.211 12:44:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:27.211 12:44:49 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:27.211 12:44:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:27.211 12:44:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:27.211 12:44:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:27.211 12:44:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1S9HLhtBSG 00:36:27.211 12:44:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:27.211 12:44:49 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:27.211 12:44:49 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:27.211 12:44:49 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:27.211 12:44:49 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:27.211 12:44:49 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:27.211 12:44:49 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:27.211 12:44:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1S9HLhtBSG 00:36:27.211 12:44:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1S9HLhtBSG 
00:36:27.211 12:44:49 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.1S9HLhtBSG 00:36:27.211 12:44:49 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1S9HLhtBSG 00:36:27.211 12:44:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1S9HLhtBSG 00:36:27.470 12:44:49 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:27.470 12:44:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:27.729 nvme0n1 00:36:27.729 12:44:49 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:36:27.729 12:44:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:27.729 12:44:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:27.729 12:44:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:27.729 12:44:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:27.729 12:44:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:27.988 12:44:50 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:36:27.988 12:44:50 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:36:27.988 12:44:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:28.247 12:44:50 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:36:28.247 12:44:50 
keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:36:28.247 12:44:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:28.247 12:44:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:28.247 12:44:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:28.506 12:44:50 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:36:28.506 12:44:50 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:36:28.506 12:44:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:28.506 12:44:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:28.506 12:44:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:28.506 12:44:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:28.506 12:44:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:28.506 12:44:50 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:36:28.506 12:44:50 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:28.506 12:44:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:28.765 12:44:50 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:36:28.765 12:44:50 keyring_file -- keyring/file.sh@105 -- # jq length 00:36:28.765 12:44:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:29.023 12:44:51 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:36:29.023 12:44:51 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1S9HLhtBSG 
00:36:29.023 12:44:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1S9HLhtBSG 00:36:29.282 12:44:51 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.AwMHgxbecw 00:36:29.282 12:44:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.AwMHgxbecw 00:36:29.282 12:44:51 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:29.282 12:44:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:29.541 nvme0n1 00:36:29.541 12:44:51 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:36:29.541 12:44:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:29.800 12:44:51 keyring_file -- keyring/file.sh@113 -- # config='{ 00:36:29.800 "subsystems": [ 00:36:29.800 { 00:36:29.800 "subsystem": "keyring", 00:36:29.800 "config": [ 00:36:29.800 { 00:36:29.800 "method": "keyring_file_add_key", 00:36:29.800 "params": { 00:36:29.800 "name": "key0", 00:36:29.800 "path": "/tmp/tmp.1S9HLhtBSG" 00:36:29.800 } 00:36:29.800 }, 00:36:29.800 { 00:36:29.800 "method": "keyring_file_add_key", 00:36:29.800 "params": { 00:36:29.800 "name": "key1", 00:36:29.800 "path": "/tmp/tmp.AwMHgxbecw" 00:36:29.800 } 00:36:29.800 } 00:36:29.800 ] 00:36:29.800 }, 00:36:29.800 { 00:36:29.800 "subsystem": "iobuf", 00:36:29.800 "config": [ 00:36:29.800 { 00:36:29.800 
"method": "iobuf_set_options", 00:36:29.800 "params": { 00:36:29.800 "small_pool_count": 8192, 00:36:29.800 "large_pool_count": 1024, 00:36:29.800 "small_bufsize": 8192, 00:36:29.800 "large_bufsize": 135168, 00:36:29.800 "enable_numa": false 00:36:29.800 } 00:36:29.800 } 00:36:29.800 ] 00:36:29.800 }, 00:36:29.800 { 00:36:29.800 "subsystem": "sock", 00:36:29.800 "config": [ 00:36:29.800 { 00:36:29.800 "method": "sock_set_default_impl", 00:36:29.800 "params": { 00:36:29.800 "impl_name": "posix" 00:36:29.800 } 00:36:29.800 }, 00:36:29.800 { 00:36:29.800 "method": "sock_impl_set_options", 00:36:29.800 "params": { 00:36:29.800 "impl_name": "ssl", 00:36:29.800 "recv_buf_size": 4096, 00:36:29.800 "send_buf_size": 4096, 00:36:29.800 "enable_recv_pipe": true, 00:36:29.800 "enable_quickack": false, 00:36:29.800 "enable_placement_id": 0, 00:36:29.800 "enable_zerocopy_send_server": true, 00:36:29.800 "enable_zerocopy_send_client": false, 00:36:29.800 "zerocopy_threshold": 0, 00:36:29.800 "tls_version": 0, 00:36:29.800 "enable_ktls": false 00:36:29.800 } 00:36:29.800 }, 00:36:29.800 { 00:36:29.800 "method": "sock_impl_set_options", 00:36:29.800 "params": { 00:36:29.800 "impl_name": "posix", 00:36:29.800 "recv_buf_size": 2097152, 00:36:29.800 "send_buf_size": 2097152, 00:36:29.800 "enable_recv_pipe": true, 00:36:29.800 "enable_quickack": false, 00:36:29.800 "enable_placement_id": 0, 00:36:29.800 "enable_zerocopy_send_server": true, 00:36:29.800 "enable_zerocopy_send_client": false, 00:36:29.800 "zerocopy_threshold": 0, 00:36:29.800 "tls_version": 0, 00:36:29.800 "enable_ktls": false 00:36:29.800 } 00:36:29.800 } 00:36:29.800 ] 00:36:29.800 }, 00:36:29.800 { 00:36:29.800 "subsystem": "vmd", 00:36:29.800 "config": [] 00:36:29.800 }, 00:36:29.800 { 00:36:29.800 "subsystem": "accel", 00:36:29.800 "config": [ 00:36:29.800 { 00:36:29.800 "method": "accel_set_options", 00:36:29.800 "params": { 00:36:29.800 "small_cache_size": 128, 00:36:29.800 "large_cache_size": 16, 00:36:29.800 
"task_count": 2048, 00:36:29.800 "sequence_count": 2048, 00:36:29.800 "buf_count": 2048 00:36:29.800 } 00:36:29.800 } 00:36:29.800 ] 00:36:29.800 }, 00:36:29.800 { 00:36:29.800 "subsystem": "bdev", 00:36:29.800 "config": [ 00:36:29.800 { 00:36:29.800 "method": "bdev_set_options", 00:36:29.800 "params": { 00:36:29.800 "bdev_io_pool_size": 65535, 00:36:29.800 "bdev_io_cache_size": 256, 00:36:29.800 "bdev_auto_examine": true, 00:36:29.800 "iobuf_small_cache_size": 128, 00:36:29.800 "iobuf_large_cache_size": 16 00:36:29.800 } 00:36:29.800 }, 00:36:29.800 { 00:36:29.800 "method": "bdev_raid_set_options", 00:36:29.800 "params": { 00:36:29.800 "process_window_size_kb": 1024, 00:36:29.800 "process_max_bandwidth_mb_sec": 0 00:36:29.800 } 00:36:29.800 }, 00:36:29.800 { 00:36:29.800 "method": "bdev_iscsi_set_options", 00:36:29.800 "params": { 00:36:29.800 "timeout_sec": 30 00:36:29.800 } 00:36:29.800 }, 00:36:29.800 { 00:36:29.800 "method": "bdev_nvme_set_options", 00:36:29.800 "params": { 00:36:29.800 "action_on_timeout": "none", 00:36:29.800 "timeout_us": 0, 00:36:29.800 "timeout_admin_us": 0, 00:36:29.800 "keep_alive_timeout_ms": 10000, 00:36:29.800 "arbitration_burst": 0, 00:36:29.800 "low_priority_weight": 0, 00:36:29.800 "medium_priority_weight": 0, 00:36:29.800 "high_priority_weight": 0, 00:36:29.800 "nvme_adminq_poll_period_us": 10000, 00:36:29.800 "nvme_ioq_poll_period_us": 0, 00:36:29.800 "io_queue_requests": 512, 00:36:29.800 "delay_cmd_submit": true, 00:36:29.800 "transport_retry_count": 4, 00:36:29.800 "bdev_retry_count": 3, 00:36:29.800 "transport_ack_timeout": 0, 00:36:29.800 "ctrlr_loss_timeout_sec": 0, 00:36:29.800 "reconnect_delay_sec": 0, 00:36:29.800 "fast_io_fail_timeout_sec": 0, 00:36:29.800 "disable_auto_failback": false, 00:36:29.800 "generate_uuids": false, 00:36:29.800 "transport_tos": 0, 00:36:29.800 "nvme_error_stat": false, 00:36:29.800 "rdma_srq_size": 0, 00:36:29.800 "io_path_stat": false, 00:36:29.800 "allow_accel_sequence": false, 00:36:29.800 
"rdma_max_cq_size": 0, 00:36:29.800 "rdma_cm_event_timeout_ms": 0, 00:36:29.800 "dhchap_digests": [ 00:36:29.800 "sha256", 00:36:29.800 "sha384", 00:36:29.800 "sha512" 00:36:29.800 ], 00:36:29.800 "dhchap_dhgroups": [ 00:36:29.800 "null", 00:36:29.800 "ffdhe2048", 00:36:29.800 "ffdhe3072", 00:36:29.800 "ffdhe4096", 00:36:29.800 "ffdhe6144", 00:36:29.800 "ffdhe8192" 00:36:29.800 ] 00:36:29.800 } 00:36:29.800 }, 00:36:29.800 { 00:36:29.800 "method": "bdev_nvme_attach_controller", 00:36:29.801 "params": { 00:36:29.801 "name": "nvme0", 00:36:29.801 "trtype": "TCP", 00:36:29.801 "adrfam": "IPv4", 00:36:29.801 "traddr": "127.0.0.1", 00:36:29.801 "trsvcid": "4420", 00:36:29.801 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:29.801 "prchk_reftag": false, 00:36:29.801 "prchk_guard": false, 00:36:29.801 "ctrlr_loss_timeout_sec": 0, 00:36:29.801 "reconnect_delay_sec": 0, 00:36:29.801 "fast_io_fail_timeout_sec": 0, 00:36:29.801 "psk": "key0", 00:36:29.801 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:29.801 "hdgst": false, 00:36:29.801 "ddgst": false, 00:36:29.801 "multipath": "multipath" 00:36:29.801 } 00:36:29.801 }, 00:36:29.801 { 00:36:29.801 "method": "bdev_nvme_set_hotplug", 00:36:29.801 "params": { 00:36:29.801 "period_us": 100000, 00:36:29.801 "enable": false 00:36:29.801 } 00:36:29.801 }, 00:36:29.801 { 00:36:29.801 "method": "bdev_wait_for_examine" 00:36:29.801 } 00:36:29.801 ] 00:36:29.801 }, 00:36:29.801 { 00:36:29.801 "subsystem": "nbd", 00:36:29.801 "config": [] 00:36:29.801 } 00:36:29.801 ] 00:36:29.801 }' 00:36:29.801 12:44:51 keyring_file -- keyring/file.sh@115 -- # killprocess 1910515 00:36:29.801 12:44:51 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1910515 ']' 00:36:29.801 12:44:51 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1910515 00:36:29.801 12:44:51 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:29.801 12:44:51 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:29.801 12:44:51 
keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1910515 00:36:30.060 12:44:52 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:30.060 12:44:52 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:30.060 12:44:52 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1910515' 00:36:30.060 killing process with pid 1910515 00:36:30.060 12:44:52 keyring_file -- common/autotest_common.sh@973 -- # kill 1910515 00:36:30.060 Received shutdown signal, test time was about 1.000000 seconds 00:36:30.060 00:36:30.060 Latency(us) 00:36:30.060 [2024-12-10T11:44:52.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:30.060 [2024-12-10T11:44:52.228Z] =================================================================================================================== 00:36:30.060 [2024-12-10T11:44:52.228Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:30.060 12:44:52 keyring_file -- common/autotest_common.sh@978 -- # wait 1910515 00:36:30.060 12:44:52 keyring_file -- keyring/file.sh@118 -- # bperfpid=1912034 00:36:30.060 12:44:52 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1912034 /var/tmp/bperf.sock 00:36:30.060 12:44:52 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1912034 ']' 00:36:30.060 12:44:52 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:30.060 12:44:52 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:30.060 12:44:52 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:30.060 12:44:52 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:36:30.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:30.060 12:44:52 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:36:30.060 "subsystems": [ 00:36:30.060 { 00:36:30.060 "subsystem": "keyring", 00:36:30.060 "config": [ 00:36:30.060 { 00:36:30.060 "method": "keyring_file_add_key", 00:36:30.060 "params": { 00:36:30.060 "name": "key0", 00:36:30.060 "path": "/tmp/tmp.1S9HLhtBSG" 00:36:30.060 } 00:36:30.060 }, 00:36:30.060 { 00:36:30.060 "method": "keyring_file_add_key", 00:36:30.060 "params": { 00:36:30.060 "name": "key1", 00:36:30.060 "path": "/tmp/tmp.AwMHgxbecw" 00:36:30.060 } 00:36:30.060 } 00:36:30.060 ] 00:36:30.060 }, 00:36:30.060 { 00:36:30.060 "subsystem": "iobuf", 00:36:30.060 "config": [ 00:36:30.060 { 00:36:30.060 "method": "iobuf_set_options", 00:36:30.060 "params": { 00:36:30.060 "small_pool_count": 8192, 00:36:30.060 "large_pool_count": 1024, 00:36:30.060 "small_bufsize": 8192, 00:36:30.060 "large_bufsize": 135168, 00:36:30.060 "enable_numa": false 00:36:30.060 } 00:36:30.060 } 00:36:30.060 ] 00:36:30.060 }, 00:36:30.060 { 00:36:30.060 "subsystem": "sock", 00:36:30.060 "config": [ 00:36:30.060 { 00:36:30.060 "method": "sock_set_default_impl", 00:36:30.060 "params": { 00:36:30.060 "impl_name": "posix" 00:36:30.060 } 00:36:30.060 }, 00:36:30.060 { 00:36:30.060 "method": "sock_impl_set_options", 00:36:30.060 "params": { 00:36:30.060 "impl_name": "ssl", 00:36:30.060 "recv_buf_size": 4096, 00:36:30.060 "send_buf_size": 4096, 00:36:30.060 "enable_recv_pipe": true, 00:36:30.060 "enable_quickack": false, 00:36:30.060 "enable_placement_id": 0, 00:36:30.060 "enable_zerocopy_send_server": true, 00:36:30.060 "enable_zerocopy_send_client": false, 00:36:30.060 "zerocopy_threshold": 0, 00:36:30.060 "tls_version": 0, 00:36:30.060 "enable_ktls": false 00:36:30.060 } 00:36:30.060 }, 00:36:30.060 { 00:36:30.060 "method": "sock_impl_set_options", 00:36:30.060 "params": { 00:36:30.060 "impl_name": "posix", 
00:36:30.060 "recv_buf_size": 2097152, 00:36:30.060 "send_buf_size": 2097152, 00:36:30.060 "enable_recv_pipe": true, 00:36:30.060 "enable_quickack": false, 00:36:30.060 "enable_placement_id": 0, 00:36:30.060 "enable_zerocopy_send_server": true, 00:36:30.060 "enable_zerocopy_send_client": false, 00:36:30.060 "zerocopy_threshold": 0, 00:36:30.060 "tls_version": 0, 00:36:30.060 "enable_ktls": false 00:36:30.060 } 00:36:30.061 } 00:36:30.061 ] 00:36:30.061 }, 00:36:30.061 { 00:36:30.061 "subsystem": "vmd", 00:36:30.061 "config": [] 00:36:30.061 }, 00:36:30.061 { 00:36:30.061 "subsystem": "accel", 00:36:30.061 "config": [ 00:36:30.061 { 00:36:30.061 "method": "accel_set_options", 00:36:30.061 "params": { 00:36:30.061 "small_cache_size": 128, 00:36:30.061 "large_cache_size": 16, 00:36:30.061 "task_count": 2048, 00:36:30.061 "sequence_count": 2048, 00:36:30.061 "buf_count": 2048 00:36:30.061 } 00:36:30.061 } 00:36:30.061 ] 00:36:30.061 }, 00:36:30.061 { 00:36:30.061 "subsystem": "bdev", 00:36:30.061 "config": [ 00:36:30.061 { 00:36:30.061 "method": "bdev_set_options", 00:36:30.061 "params": { 00:36:30.061 "bdev_io_pool_size": 65535, 00:36:30.061 "bdev_io_cache_size": 256, 00:36:30.061 "bdev_auto_examine": true, 00:36:30.061 "iobuf_small_cache_size": 128, 00:36:30.061 "iobuf_large_cache_size": 16 00:36:30.061 } 00:36:30.061 }, 00:36:30.061 { 00:36:30.061 "method": "bdev_raid_set_options", 00:36:30.061 "params": { 00:36:30.061 "process_window_size_kb": 1024, 00:36:30.061 "process_max_bandwidth_mb_sec": 0 00:36:30.061 } 00:36:30.061 }, 00:36:30.061 { 00:36:30.061 "method": "bdev_iscsi_set_options", 00:36:30.061 "params": { 00:36:30.061 "timeout_sec": 30 00:36:30.061 } 00:36:30.061 }, 00:36:30.061 { 00:36:30.061 "method": "bdev_nvme_set_options", 00:36:30.061 "params": { 00:36:30.061 "action_on_timeout": "none", 00:36:30.061 "timeout_us": 0, 00:36:30.061 "timeout_admin_us": 0, 00:36:30.061 "keep_alive_timeout_ms": 10000, 00:36:30.061 "arbitration_burst": 0, 00:36:30.061 
"low_priority_weight": 0, 00:36:30.061 "medium_priority_weight": 0, 00:36:30.061 "high_priority_weight": 0, 00:36:30.061 "nvme_adminq_poll_period_us": 10000, 00:36:30.061 "nvme_ioq_poll_period_us": 0, 00:36:30.061 "io_queue_requests": 512, 00:36:30.061 "delay_cmd_submit": true, 00:36:30.061 "transport_retry_count": 4, 00:36:30.061 "bdev_retry_count": 3, 00:36:30.061 "transport_ack_timeout": 0, 00:36:30.061 "ctrlr_loss_timeout_sec": 0, 00:36:30.061 "reconnect_delay_sec": 0, 00:36:30.061 "fast_io_fail_timeout_sec": 0, 00:36:30.061 "disable_auto_failback": false, 00:36:30.061 "generate_uuids": false, 00:36:30.061 "transport_tos": 0, 00:36:30.061 "nvme_error_stat": false, 00:36:30.061 "rdma_srq_size": 0, 00:36:30.061 "io_path_stat": false, 00:36:30.061 "allow_accel_sequence": false, 00:36:30.061 "rdma_max_cq_size": 0, 00:36:30.061 "rdma_cm_event_timeout_ms": 0, 00:36:30.061 "dhchap_digests": [ 00:36:30.061 "sha256", 00:36:30.061 "sha384", 00:36:30.061 "sha512" 00:36:30.061 ], 00:36:30.061 "dhchap_dhgroups": [ 00:36:30.061 "null", 00:36:30.061 "ffdhe2048", 00:36:30.061 "ffdhe3072", 00:36:30.061 "ffdhe4096", 00:36:30.061 "ffdhe6144", 00:36:30.061 "ffdhe8192" 00:36:30.061 ] 00:36:30.061 } 00:36:30.061 }, 00:36:30.061 { 00:36:30.061 "method": "bdev_nvme_attach_controller", 00:36:30.061 "params": { 00:36:30.061 "name": "nvme0", 00:36:30.061 "trtype": "TCP", 00:36:30.061 "adrfam": "IPv4", 00:36:30.061 "traddr": "127.0.0.1", 00:36:30.061 "trsvcid": "4420", 00:36:30.061 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:30.061 "prchk_reftag": false, 00:36:30.061 "prchk_guard": false, 00:36:30.061 "ctrlr_loss_timeout_sec": 0, 00:36:30.061 "reconnect_delay_sec": 0, 00:36:30.061 "fast_io_fail_timeout_sec": 0, 00:36:30.061 "psk": "key0", 00:36:30.061 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:30.061 "hdgst": false, 00:36:30.061 "ddgst": false, 00:36:30.061 "multipath": "multipath" 00:36:30.061 } 00:36:30.061 }, 00:36:30.061 { 00:36:30.061 "method": "bdev_nvme_set_hotplug", 
00:36:30.061 "params": { 00:36:30.061 "period_us": 100000, 00:36:30.061 "enable": false 00:36:30.061 } 00:36:30.061 }, 00:36:30.061 { 00:36:30.061 "method": "bdev_wait_for_examine" 00:36:30.061 } 00:36:30.061 ] 00:36:30.061 }, 00:36:30.061 { 00:36:30.061 "subsystem": "nbd", 00:36:30.061 "config": [] 00:36:30.061 } 00:36:30.061 ] 00:36:30.061 }' 00:36:30.061 12:44:52 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:30.061 12:44:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:30.061 [2024-12-10 12:44:52.211959] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization... 00:36:30.061 [2024-12-10 12:44:52.212012] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1912034 ] 00:36:30.320 [2024-12-10 12:44:52.289631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:30.320 [2024-12-10 12:44:52.326647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:30.578 [2024-12-10 12:44:52.487866] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:31.145 12:44:53 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:31.145 12:44:53 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:31.145 12:44:53 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:36:31.145 12:44:53 keyring_file -- keyring/file.sh@121 -- # jq length 00:36:31.145 12:44:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:31.145 12:44:53 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:31.145 12:44:53 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:36:31.145 12:44:53 keyring_file -- keyring/common.sh@12 
-- # get_key key0 00:36:31.145 12:44:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:31.145 12:44:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:31.145 12:44:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:31.145 12:44:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:31.403 12:44:53 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:36:31.403 12:44:53 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:36:31.403 12:44:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:31.403 12:44:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:31.403 12:44:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:31.403 12:44:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:31.403 12:44:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:31.662 12:44:53 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:36:31.662 12:44:53 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:36:31.662 12:44:53 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:36:31.662 12:44:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:31.920 12:44:53 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:36:31.920 12:44:53 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:31.920 12:44:53 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.1S9HLhtBSG /tmp/tmp.AwMHgxbecw 00:36:31.920 12:44:53 keyring_file -- keyring/file.sh@20 -- # killprocess 1912034 00:36:31.920 12:44:53 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1912034 ']' 
00:36:31.920 12:44:53 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1912034
00:36:31.920 12:44:53 keyring_file -- common/autotest_common.sh@959 -- # uname
00:36:31.920 12:44:53 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:31.920 12:44:53 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1912034
00:36:31.920 12:44:53 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:31.920 12:44:53 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:31.920 12:44:53 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1912034'
00:36:31.920 killing process with pid 1912034
00:36:31.920 12:44:53 keyring_file -- common/autotest_common.sh@973 -- # kill 1912034
00:36:31.920 Received shutdown signal, test time was about 1.000000 seconds
00:36:31.920
00:36:31.920 Latency(us)
00:36:31.920 [2024-12-10T11:44:54.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:31.920 [2024-12-10T11:44:54.088Z] ===================================================================================================================
00:36:31.920 [2024-12-10T11:44:54.088Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:36:31.920 12:44:53 keyring_file -- common/autotest_common.sh@978 -- # wait 1912034
00:36:32.179 12:44:54 keyring_file -- keyring/file.sh@21 -- # killprocess 1910508
00:36:32.179 12:44:54 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1910508 ']'
00:36:32.179 12:44:54 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1910508
00:36:32.179 12:44:54 keyring_file -- common/autotest_common.sh@959 -- # uname
00:36:32.179 12:44:54 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:32.179 12:44:54 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1910508
00:36:32.179 12:44:54 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:36:32.179 12:44:54 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:36:32.179 12:44:54 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1910508'
00:36:32.179 killing process with pid 1910508
00:36:32.179 12:44:54 keyring_file -- common/autotest_common.sh@973 -- # kill 1910508
00:36:32.179 12:44:54 keyring_file -- common/autotest_common.sh@978 -- # wait 1910508
00:36:32.438
00:36:32.438 real 0m11.935s
00:36:32.438 user 0m29.753s
00:36:32.438 sys 0m2.712s
00:36:32.438 12:44:54 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:32.438 12:44:54 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:36:32.438 ************************************
00:36:32.438 END TEST keyring_file
00:36:32.438 ************************************
00:36:32.438 12:44:54 -- spdk/autotest.sh@293 -- # [[ y == y ]]
00:36:32.438 12:44:54 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/linux.sh
00:36:32.438 12:44:54 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:36:32.438 12:44:54 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:32.438 12:44:54 -- common/autotest_common.sh@10 -- # set +x
00:36:32.438 ************************************
00:36:32.438 START TEST keyring_linux
00:36:32.438 ************************************
00:36:32.438 12:44:54 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/linux.sh
00:36:32.438 Joined session keyring: 88099330
00:36:32.438 * Looking for test storage...
00:36:32.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring
00:36:32.697 12:44:54 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:36:32.697 12:44:54 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version
00:36:32.697 12:44:54 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:36:32.697 12:44:54 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@336 -- # IFS=.-:
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@337 -- # IFS=.-:
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@338 -- # local 'op=<'
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@344 -- # case "$op" in
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@345 -- # : 1
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 ))
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@365 -- # decimal 1
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@353 -- # local d=1
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@355 -- # echo 1
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@366 -- # decimal 2
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@353 -- # local d=2
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@355 -- # echo 2
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@368 -- # return 0
00:36:32.697 12:44:54 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:36:32.697 12:44:54 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:36:32.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:32.697 --rc genhtml_branch_coverage=1
00:36:32.697 --rc genhtml_function_coverage=1
00:36:32.697 --rc genhtml_legend=1
00:36:32.697 --rc geninfo_all_blocks=1
00:36:32.697 --rc geninfo_unexecuted_blocks=1
00:36:32.697
00:36:32.697 '
00:36:32.697 12:44:54 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:36:32.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:32.697 --rc genhtml_branch_coverage=1
00:36:32.697 --rc genhtml_function_coverage=1
00:36:32.697 --rc genhtml_legend=1
00:36:32.697 --rc geninfo_all_blocks=1
00:36:32.697 --rc geninfo_unexecuted_blocks=1
00:36:32.697
00:36:32.697 '
00:36:32.697 12:44:54 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:36:32.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:32.697 --rc genhtml_branch_coverage=1
00:36:32.697 --rc genhtml_function_coverage=1
00:36:32.697 --rc genhtml_legend=1
00:36:32.697 --rc geninfo_all_blocks=1
00:36:32.697 --rc geninfo_unexecuted_blocks=1
00:36:32.697
00:36:32.697 '
00:36:32.697 12:44:54 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:36:32.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:32.697 --rc genhtml_branch_coverage=1
00:36:32.697 --rc genhtml_function_coverage=1
00:36:32.697 --rc genhtml_legend=1
00:36:32.697 --rc geninfo_all_blocks=1
00:36:32.697 --rc geninfo_unexecuted_blocks=1
00:36:32.697
00:36:32.697 '
00:36:32.697 12:44:54 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/common.sh
00:36:32.697 12:44:54 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@7 -- # uname -s
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:32.697 12:44:54 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:32.697 12:44:54 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:32.697 12:44:54 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:32.697 12:44:54 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:32.697 12:44:54 keyring_linux -- paths/export.sh@5 -- # export PATH
00:36:32.697 12:44:54 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@51 -- # : 0
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:36:32.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:36:32.697 12:44:54 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0
00:36:32.697 12:44:54 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock
00:36:32.697 12:44:54 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0
00:36:32.697 12:44:54 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0
00:36:32.697 12:44:54 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff
00:36:32.697 12:44:54 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00
00:36:32.697 12:44:54 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT
00:36:32.697 12:44:54 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0
00:36:32.697 12:44:54 keyring_linux -- keyring/common.sh@15 -- # local name key digest path
00:36:32.697 12:44:54 keyring_linux -- keyring/common.sh@17 -- # name=key0
00:36:32.697 12:44:54 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:36:32.697 12:44:54 keyring_linux -- keyring/common.sh@17 -- # digest=0
00:36:32.697 12:44:54 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0
00:36:32.697 12:44:54 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:36:32.698 12:44:54 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:36:32.698 12:44:54 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest
00:36:32.698 12:44:54 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:36:32.698 12:44:54 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff
00:36:32.698 12:44:54 keyring_linux -- nvmf/common.sh@732 -- # digest=0
00:36:32.698 12:44:54 keyring_linux -- nvmf/common.sh@733 -- # python -
00:36:32.698 12:44:54 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0
00:36:32.698 12:44:54 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0
00:36:32.698 /tmp/:spdk-test:key0
00:36:32.698 12:44:54 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1
00:36:32.698 12:44:54 keyring_linux -- keyring/common.sh@15 -- # local name key digest path
00:36:32.698 12:44:54 keyring_linux -- keyring/common.sh@17 -- # name=key1
00:36:32.698 12:44:54 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00
00:36:32.698 12:44:54 keyring_linux -- keyring/common.sh@17 -- # digest=0
00:36:32.698 12:44:54 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1
00:36:32.698 12:44:54 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0
00:36:32.698 12:44:54 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0
00:36:32.698 12:44:54 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest
00:36:32.698 12:44:54 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:36:32.698 12:44:54 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00
00:36:32.698 12:44:54 keyring_linux -- nvmf/common.sh@732 -- # digest=0
00:36:32.698 12:44:54 keyring_linux -- nvmf/common.sh@733 -- # python -
00:36:32.698 12:44:54 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1
00:36:32.698 12:44:54 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1
00:36:32.698 /tmp/:spdk-test:key1
00:36:32.698 12:44:54 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1912586
00:36:32.698 12:44:54 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1912586
00:36:32.698 12:44:54 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt
00:36:32.698 12:44:54 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1912586 ']'
00:36:32.698 12:44:54 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:32.698 12:44:54 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:32.698 12:44:54 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:32.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:32.698 12:44:54 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:32.698 12:44:54 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:36:32.956 [2024-12-10 12:44:54.863334] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization...
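The `format_interchange_psk` / `python -` steps in the log above wrap each configured key in the NVMe TLS PSK interchange format before it is written to a file or loaded into the kernel keyring: a `NVMeTLSkey-1` prefix, a hash-identifier field (`00` here, i.e. an unhashed configured PSK), and a base64 payload, terminated by `:`. A minimal sketch of that wrapping, assuming the payload is the key's ASCII bytes followed by a little-endian CRC-32 checksum (the exact checksum variant is an assumption drawn from the general interchange-format layout, not confirmed by this log):

```python
import base64
import zlib

def format_interchange_psk(key: str, hash_id: int = 0) -> str:
    """Sketch of the PSK interchange wrapping seen in the log.

    Assumption: payload = ASCII bytes of the configured key, followed by
    a 4-byte little-endian CRC-32 over those bytes.
    """
    raw = key.encode("ascii")
    crc = zlib.crc32(raw).to_bytes(4, "little")  # integrity check appended to the key
    payload = base64.b64encode(raw + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hash_id:02}:{payload}:"

print(format_interchange_psk("00112233445566778899aabbccddeeff"))
```

Decoding the base64 payload of the key strings actually logged (for example `NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:`) yields the 32 ASCII characters of the configured key followed by 4 checksum bytes, consistent with this layout.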
00:36:32.956 [2024-12-10 12:44:54.863384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1912586 ]
00:36:32.956 [2024-12-10 12:44:54.922128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:32.956 [2024-12-10 12:44:54.964094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:36:33.214 12:44:55 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:33.214 12:44:55 keyring_linux -- common/autotest_common.sh@868 -- # return 0
00:36:33.214 12:44:55 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd
00:36:33.214 12:44:55 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:33.214 12:44:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:36:33.214 [2024-12-10 12:44:55.186486] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:33.214 null0
00:36:33.214 [2024-12-10 12:44:55.218537] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:36:33.214 [2024-12-10 12:44:55.218906] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:36:33.214 12:44:55 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:33.215 12:44:55 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s
00:36:33.215 647496209
00:36:33.215 12:44:55 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s
00:36:33.215 398241612
00:36:33.215 12:44:55 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1912610
00:36:33.215 12:44:55 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1912610 /var/tmp/bperf.sock
00:36:33.215 12:44:55 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc
00:36:33.215 12:44:55 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1912610 ']'
00:36:33.215 12:44:55 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:36:33.215 12:44:55 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:33.215 12:44:55 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:36:33.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:36:33.215 12:44:55 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:33.215 12:44:55 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:36:33.215 [2024-12-10 12:44:55.292903] Starting SPDK v25.01-pre git sha1 92d1e663a / DPDK 24.03.0 initialization...
00:36:33.215 [2024-12-10 12:44:55.292945] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1912610 ]
00:36:33.215 [2024-12-10 12:44:55.368872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:33.473 [2024-12-10 12:44:55.409039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:36:33.473 12:44:55 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:33.473 12:44:55 keyring_linux -- common/autotest_common.sh@868 -- # return 0
00:36:33.473 12:44:55 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable
00:36:33.473 12:44:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
00:36:33.731 12:44:55 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init
00:36:33.731 12:44:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:36:33.989 12:44:55 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:36:33.989 12:44:55 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:36:33.989 [2024-12-10 12:44:56.071483] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:36:33.989 nvme0n1
00:36:34.248 12:44:56 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0
00:36:34.248 12:44:56 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0
00:36:34.248 12:44:56 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:36:34.248 12:44:56 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:36:34.248 12:44:56 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:36:34.248 12:44:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:36:34.248 12:44:56 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count ))
00:36:34.248 12:44:56 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:36:34.248 12:44:56 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn
00:36:34.248 12:44:56 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0
00:36:34.248 12:44:56 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:36:34.248 12:44:56 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")'
00:36:34.248 12:44:56 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:36:34.507 12:44:56 keyring_linux -- keyring/linux.sh@25 -- # sn=647496209
00:36:34.507 12:44:56 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0
00:36:34.507 12:44:56 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:36:34.507 12:44:56 keyring_linux -- keyring/linux.sh@26 -- # [[ 647496209 == \6\4\7\4\9\6\2\0\9 ]]
00:36:34.507 12:44:56 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 647496209
00:36:34.507 12:44:56 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:36:34.507 12:44:56 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:36:34.765 Running I/O for 1 seconds...
00:36:35.701 21083.00 IOPS, 82.36 MiB/s
00:36:35.701
00:36:35.701 Latency(us)
00:36:35.701 [2024-12-10T11:44:57.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:35.701 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:36:35.701 nvme0n1 : 1.01 21083.76 82.36 0.00 0.00 6050.76 5100.41 12651.30
00:36:35.701 [2024-12-10T11:44:57.869Z] ===================================================================================================================
00:36:35.701 [2024-12-10T11:44:57.869Z] Total : 21083.76 82.36 0.00 0.00 6050.76 5100.41 12651.30
00:36:35.701 {
00:36:35.701 "results": [
00:36:35.701 {
00:36:35.701 "job": "nvme0n1",
00:36:35.701 "core_mask": "0x2",
00:36:35.701 "workload": "randread",
00:36:35.701 "status": "finished",
00:36:35.701 "queue_depth": 128,
00:36:35.701 "io_size": 4096,
00:36:35.701 "runtime": 1.006035,
00:36:35.701 "iops": 21083.75951134901,
00:36:35.701 "mibps": 82.35843559120707,
00:36:35.701 "io_failed": 0,
00:36:35.701 "io_timeout": 0,
00:36:35.701 "avg_latency_us": 6050.763909292348,
00:36:35.701 "min_latency_us": 5100.410434782609,
00:36:35.701 "max_latency_us": 12651.297391304348
00:36:35.701 }
00:36:35.701 ],
00:36:35.701 "core_count": 1
00:36:35.701 }
00:36:35.701 12:44:57 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:36:35.701 12:44:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:36:35.960 12:44:57 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:36:35.960 12:44:57 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:36:35.960 12:44:57 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:36:35.960 12:44:57 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:36:35.960 12:44:57 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:36:35.960 12:44:57 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:36:35.960 12:44:58 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:36:35.960 12:44:58 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:36:35.960 12:44:58 keyring_linux -- keyring/linux.sh@23 -- # return
00:36:35.960 12:44:58 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:36:35.960 12:44:58 keyring_linux -- common/autotest_common.sh@652 -- # local es=0
00:36:35.960 12:44:58 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:36:35.960 12:44:58 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:36:35.960 12:44:58 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:36:35.960 12:44:58 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:36:35.960 12:44:58 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:36:35.960 12:44:58 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:36:35.960 12:44:58 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:36:36.219 [2024-12-10 12:44:58.297868] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:36:36.219 [2024-12-10 12:44:58.297976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef0c20 (107): Transport endpoint is not connected
00:36:36.219 [2024-12-10 12:44:58.298970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef0c20 (9): Bad file descriptor
00:36:36.219 [2024-12-10 12:44:58.299971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:36:36.219 [2024-12-10 12:44:58.299982] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:36:36.219 [2024-12-10 12:44:58.299989] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:36:36.219 [2024-12-10 12:44:58.299996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
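The attach attempt above is wrapped in the `NOT` helper, so the test passes only because the call with the wrong PSK (`:spdk-test:key1`) fails: the target rejects the TLS handshake and rpc.py reports a JSON-RPC error. A sketch of interpreting such an error envelope; the error values are taken from the response printed in the log, while the helper function itself is illustrative:

```python
import json

# Error envelope as printed in the log below, reconstructed as one JSON object.
response = json.loads('{"code": -5, "message": "Input/output error"}')

def expect_failure(resp: dict) -> bool:
    # Mirrors the NOT wrapper's intent: the negative test succeeds
    # only when the RPC reports an error (negative JSON-RPC code).
    return resp.get("code", 0) < 0

print(expect_failure(response))  # True
```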
00:36:36.219 request:
00:36:36.219 {
00:36:36.219 "name": "nvme0",
00:36:36.219 "trtype": "tcp",
00:36:36.219 "traddr": "127.0.0.1",
00:36:36.219 "adrfam": "ipv4",
00:36:36.219 "trsvcid": "4420",
00:36:36.219 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:36:36.219 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:36:36.219 "prchk_reftag": false,
00:36:36.219 "prchk_guard": false,
00:36:36.219 "hdgst": false,
00:36:36.219 "ddgst": false,
00:36:36.219 "psk": ":spdk-test:key1",
00:36:36.219 "allow_unrecognized_csi": false,
00:36:36.219 "method": "bdev_nvme_attach_controller",
00:36:36.219 "req_id": 1
00:36:36.219 }
00:36:36.219 Got JSON-RPC error response
00:36:36.219 response:
00:36:36.219 {
00:36:36.219 "code": -5,
00:36:36.219 "message": "Input/output error"
00:36:36.219 }
00:36:36.219 12:44:58 keyring_linux -- common/autotest_common.sh@655 -- # es=1
00:36:36.219 12:44:58 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:36:36.219 12:44:58 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:36:36.219 12:44:58 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:36:36.219 12:44:58 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:36:36.219 12:44:58 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:36:36.219 12:44:58 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:36:36.219 12:44:58 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:36:36.219 12:44:58 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:36:36.219 12:44:58 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:36:36.219 12:44:58 keyring_linux -- keyring/linux.sh@33 -- # sn=647496209
00:36:36.219 12:44:58 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 647496209
00:36:36.219 1 links removed
00:36:36.219 12:44:58 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:36:36.219 12:44:58 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
12:44:58 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:36:36.219 12:44:58 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:36:36.219 12:44:58 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:36:36.219 12:44:58 keyring_linux -- keyring/linux.sh@33 -- # sn=398241612
00:36:36.219 12:44:58 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 398241612
00:36:36.219 1 links removed
00:36:36.219 12:44:58 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1912610
00:36:36.219 12:44:58 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1912610 ']'
00:36:36.219 12:44:58 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1912610
00:36:36.219 12:44:58 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:36:36.219 12:44:58 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:36.219 12:44:58 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1912610
00:36:36.478 12:44:58 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:36.478 12:44:58 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:36.478 12:44:58 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1912610'
00:36:36.478 killing process with pid 1912610
00:36:36.478 12:44:58 keyring_linux -- common/autotest_common.sh@973 -- # kill 1912610
00:36:36.478 Received shutdown signal, test time was about 1.000000 seconds
00:36:36.478
00:36:36.478 Latency(us)
00:36:36.478 [2024-12-10T11:44:58.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:36.478 [2024-12-10T11:44:58.646Z] ===================================================================================================================
00:36:36.478 [2024-12-10T11:44:58.646Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:36.478 12:44:58 keyring_linux -- common/autotest_common.sh@978 -- # wait 1912610
00:36:36.478 12:44:58 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1912586 00:36:36.478 12:44:58 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1912586 ']' 00:36:36.478 12:44:58 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1912586 00:36:36.478 12:44:58 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:36.478 12:44:58 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:36.478 12:44:58 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1912586 00:36:36.478 12:44:58 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:36.478 12:44:58 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:36.478 12:44:58 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1912586' 00:36:36.478 killing process with pid 1912586 00:36:36.478 12:44:58 keyring_linux -- common/autotest_common.sh@973 -- # kill 1912586 00:36:36.478 12:44:58 keyring_linux -- common/autotest_common.sh@978 -- # wait 1912586 00:36:36.738 00:36:36.738 real 0m4.384s 00:36:36.738 user 0m8.315s 00:36:36.738 sys 0m1.459s 00:36:36.738 12:44:58 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:36.738 12:44:58 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:36.738 ************************************ 00:36:36.738 END TEST keyring_linux 00:36:36.738 ************************************ 00:36:36.997 12:44:58 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:36.997 12:44:58 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:36.997 12:44:58 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:36.997 12:44:58 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:36.997 12:44:58 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:36.997 12:44:58 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:36.997 12:44:58 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:36.997 12:44:58 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:36:36.997 12:44:58 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:36.997 12:44:58 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:36.997 12:44:58 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:36.997 12:44:58 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:36.997 12:44:58 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:36.997 12:44:58 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:36.997 12:44:58 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:36.997 12:44:58 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:36.997 12:44:58 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:36.997 12:44:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:36.997 12:44:58 -- common/autotest_common.sh@10 -- # set +x 00:36:36.997 12:44:58 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:36.997 12:44:58 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:36.997 12:44:58 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:36.997 12:44:58 -- common/autotest_common.sh@10 -- # set +x 00:36:42.273 INFO: APP EXITING 00:36:42.273 INFO: killing all VMs 00:36:42.273 INFO: killing vhost app 00:36:42.273 INFO: EXIT DONE 00:36:44.808 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:36:44.808 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:36:44.808 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:36:44.808 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:36:44.808 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:36:44.808 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:36:44.808 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:36:44.808 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:36:44.808 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:36:44.808 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:36:44.808 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:36:44.808 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:36:44.808 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:36:44.808 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:36:44.808 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:36:44.808 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:36:44.808 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:36:48.096 Cleaning 00:36:48.096 Removing: /var/run/dpdk/spdk0/config 00:36:48.096 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:48.096 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:48.096 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:48.096 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:48.096 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:48.096 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:48.096 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:48.096 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:48.096 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:48.096 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:48.096 Removing: /var/run/dpdk/spdk1/config 00:36:48.096 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:48.096 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:48.096 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:48.096 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:48.096 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:48.096 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:48.096 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:48.096 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:48.096 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:48.096 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:48.096 Removing: /var/run/dpdk/spdk2/config 00:36:48.096 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:48.096 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:48.096 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:48.096 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:48.096 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:48.096 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:48.096 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:48.096 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:48.096 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:48.096 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:48.096 Removing: /var/run/dpdk/spdk3/config 00:36:48.096 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:48.096 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:48.096 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:48.096 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:48.096 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:48.096 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:48.096 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:48.096 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:48.096 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:48.096 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:48.097 Removing: /var/run/dpdk/spdk4/config 00:36:48.097 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:48.097 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:48.097 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:48.097 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:48.097 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:48.097 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:48.097 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:48.097 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:48.097 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:48.097 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:36:48.097 Removing: /dev/shm/bdev_svc_trace.1 00:36:48.097 Removing: /dev/shm/nvmf_trace.0 00:36:48.097 Removing: /dev/shm/spdk_tgt_trace.pid1433755 00:36:48.097 Removing: /var/run/dpdk/spdk0 00:36:48.097 Removing: /var/run/dpdk/spdk1 00:36:48.097 Removing: /var/run/dpdk/spdk2 00:36:48.097 Removing: /var/run/dpdk/spdk3 00:36:48.097 Removing: /var/run/dpdk/spdk4 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1431603 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1432673 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1433755 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1434392 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1435343 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1435381 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1436421 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1436560 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1436867 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1438439 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1439717 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1440051 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1440303 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1440605 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1440895 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1441147 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1441396 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1441684 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1442427 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1445446 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1445747 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1445952 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1446133 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1446452 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1446670 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1446945 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1447133 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1447401 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1447432 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1447696 00:36:48.097 Removing: 
/var/run/dpdk/spdk_pid1447709 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1448266 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1448516 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1448817 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1452515 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1456789 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1467545 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1468032 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1472315 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1472782 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1477058 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1482956 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1485768 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1495978 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1504908 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1506865 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1508198 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1525236 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1529213 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1574635 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1579923 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1585785 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1592297 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1592299 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1593214 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1594002 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1594833 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1595517 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1595528 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1595779 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1595982 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1595988 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1596900 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1597812 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1598597 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1599201 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1599206 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1599446 
00:36:48.097 Removing: /var/run/dpdk/spdk_pid1600533 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1601688 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1610266 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1639623 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1644136 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1645734 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1647574 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1647806 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1647848 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1648062 00:36:48.097 Removing: /var/run/dpdk/spdk_pid1648566 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1650409 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1651180 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1651676 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1653775 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1654274 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1654987 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1659146 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1664631 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1664633 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1664634 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1668433 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1676782 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1680938 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1687324 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1688618 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1690183 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1691732 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1696213 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1700555 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1704576 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1712168 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1712173 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1716668 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1716902 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1717129 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1717582 00:36:48.356 Removing: 
/var/run/dpdk/spdk_pid1717588 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1722069 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1722642 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1726990 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1729671 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1735456 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1740844 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1749568 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1756544 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1756551 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1775319 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1775793 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1776475 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1776958 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1777782 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1778335 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1779383 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1779861 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1784061 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1784343 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1790332 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1790461 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1795878 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1800045 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1809868 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1810397 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1814660 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1815078 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1819143 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1824949 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1827930 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1837995 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1846702 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1848307 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1849226 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1865432 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1869388 00:36:48.356 Removing: /var/run/dpdk/spdk_pid1872133 
00:36:48.615 Removing: /var/run/dpdk/spdk_pid1880424 00:36:48.615 Removing: /var/run/dpdk/spdk_pid1880551 00:36:48.615 Removing: /var/run/dpdk/spdk_pid1885584 00:36:48.615 Removing: /var/run/dpdk/spdk_pid1887464 00:36:48.615 Removing: /var/run/dpdk/spdk_pid1889312 00:36:48.615 Removing: /var/run/dpdk/spdk_pid1890551 00:36:48.615 Removing: /var/run/dpdk/spdk_pid1892550 00:36:48.615 Removing: /var/run/dpdk/spdk_pid1893612 00:36:48.615 Removing: /var/run/dpdk/spdk_pid1902348 00:36:48.615 Removing: /var/run/dpdk/spdk_pid1902807 00:36:48.615 Removing: /var/run/dpdk/spdk_pid1903277 00:36:48.615 Removing: /var/run/dpdk/spdk_pid1905552 00:36:48.615 Removing: /var/run/dpdk/spdk_pid1906097 00:36:48.615 Removing: /var/run/dpdk/spdk_pid1906654 00:36:48.615 Removing: /var/run/dpdk/spdk_pid1910508 00:36:48.615 Removing: /var/run/dpdk/spdk_pid1910515 00:36:48.615 Removing: /var/run/dpdk/spdk_pid1912034 00:36:48.615 Removing: /var/run/dpdk/spdk_pid1912586 00:36:48.615 Removing: /var/run/dpdk/spdk_pid1912610 00:36:48.615 Clean 00:36:48.615 12:45:10 -- common/autotest_common.sh@1453 -- # return 0 00:36:48.615 12:45:10 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:48.615 12:45:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:48.615 12:45:10 -- common/autotest_common.sh@10 -- # set +x 00:36:48.615 12:45:10 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:36:48.615 12:45:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:48.615 12:45:10 -- common/autotest_common.sh@10 -- # set +x 00:36:48.615 12:45:10 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/timing.txt 00:36:48.615 12:45:10 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/udev.log ]] 00:36:48.615 12:45:10 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/udev.log 00:36:48.615 12:45:10 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:48.615 
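The "Cleaning" step above reports each DPDK runtime file it deletes, one `Removing:` line per path under `/var/run/dpdk` and `/dev/shm`. A safe sketch of the same report-then-remove loop, using a temporary directory instead of the real runtime paths (all file names here are placeholders):

```shell
# Hypothetical sketch of the cleanup loop: list and delete stale SPDK/DPDK
# runtime state, printing one "Removing:" line per file, as in the log above.
run_dir=$(mktemp -d)                               # stand-in for /var/run/dpdk
mkdir -p "$run_dir/spdk0" "$run_dir/spdk1"
touch "$run_dir/spdk0/config" "$run_dir/spdk0/hugepage_info" "$run_dir/spdk1/config"
for f in "$run_dir"/spdk*/*; do
  echo "Removing: $f"                              # report before deleting
  rm -f "$f"
done
remaining=$(find "$run_dir" -type f | wc -l)       # verify nothing is left
echo "files left: $remaining"
rm -rf "$run_dir"
```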
12:45:10 -- spdk/autotest.sh@398 -- # hostname 00:36:48.616 12:45:10 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_test.info 00:36:48.874 geninfo: WARNING: invalid characters removed from testname! 00:37:10.965 12:45:31 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info 00:37:12.866 12:45:34 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info 00:37:14.772 12:45:36 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info 00:37:16.676 12:45:38 -- spdk/autotest.sh@405 -- # lcov 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info 00:37:18.581 12:45:40 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info 00:37:20.484 12:45:42 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info 00:37:22.388 12:45:44 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:22.388 12:45:44 -- spdk/autorun.sh@1 -- $ timing_finish 00:37:22.388 12:45:44 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/timing.txt ]] 00:37:22.388 12:45:44 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:22.388 12:45:44 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:37:22.388 12:45:44 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds 
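The coverage post-processing above runs `lcov` several times: first merging the base and test captures (`-a ... -a ... -o`), then repeatedly stripping path patterns (`-r total '<glob>' -o total`) for DPDK, `/usr`, and example/app sources. This sketch only assembles the equivalent command sequence as strings, without invoking `lcov` (the file names are placeholders, and the `--rc` knobs from the log are omitted for brevity):

```shell
# Assemble (but do not run) the merge-then-filter lcov sequence seen above.
out=cov_total.info
cmds="lcov -q -a cov_base.info -a cov_test.info -o $out
lcov -q -r $out '*/dpdk/*' -o $out
lcov -q -r $out '/usr/*' -o $out"
echo "$cmds"
n=$(echo "$cmds" | wc -l)      # one line per lcov invocation
echo "steps: $n"
```

Each `-r` pass rewrites the same total file in place, so the filters compose: whatever survives one pass is the input to the next.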
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/timing.txt 00:37:22.388 + [[ -n 1354116 ]] 00:37:22.388 + sudo kill 1354116 00:37:22.398 [Pipeline] } 00:37:22.413 [Pipeline] // stage 00:37:22.418 [Pipeline] } 00:37:22.432 [Pipeline] // timeout 00:37:22.437 [Pipeline] } 00:37:22.450 [Pipeline] // catchError 00:37:22.455 [Pipeline] } 00:37:22.470 [Pipeline] // wrap 00:37:22.475 [Pipeline] } 00:37:22.487 [Pipeline] // catchError 00:37:22.495 [Pipeline] stage 00:37:22.497 [Pipeline] { (Epilogue) 00:37:22.508 [Pipeline] catchError 00:37:22.510 [Pipeline] { 00:37:22.521 [Pipeline] echo 00:37:22.522 Cleanup processes 00:37:22.527 [Pipeline] sh 00:37:22.813 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:37:22.813 1923789 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:37:22.826 [Pipeline] sh 00:37:23.118 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:37:23.118 ++ grep -v 'sudo pgrep' 00:37:23.118 ++ awk '{print $1}' 00:37:23.118 + sudo kill -9 00:37:23.118 + true 00:37:23.148 [Pipeline] sh 00:37:23.436 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:35.658 [Pipeline] sh 00:37:35.942 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:35.942 Artifacts sizes are good 00:37:35.955 [Pipeline] archiveArtifacts 00:37:35.962 Archiving artifacts 00:37:36.093 [Pipeline] sh 00:37:36.379 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2 00:37:36.392 [Pipeline] cleanWs 00:37:36.402 [WS-CLEANUP] Deleting project workspace... 00:37:36.402 [WS-CLEANUP] Deferred wipeout is used... 00:37:36.409 [WS-CLEANUP] done 00:37:36.411 [Pipeline] } 00:37:36.426 [Pipeline] // catchError 00:37:36.437 [Pipeline] sh 00:37:36.736 + logger -p user.info -t JENKINS-CI 00:37:36.778 [Pipeline] } 00:37:36.791 [Pipeline] // stage 00:37:36.796 [Pipeline] } 00:37:36.810 [Pipeline] // node 00:37:36.815 [Pipeline] End of Pipeline 00:37:36.856 Finished: SUCCESS
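The epilogue's "Cleanup processes" step above harvests PIDs with `pgrep -af <path> | grep -v 'sudo pgrep' | awk '{print $1}'`: `pgrep -af` prints each match as "PID full-command-line", `grep -v` drops the pgrep invocation itself, and `awk` keeps only the PID column (when nothing matches, the subsequent `sudo kill -9` gets no arguments, fails, and the trailing `true` swallows the error). The filter can be exercised on a canned sample so no real processes are touched (the PIDs and paths below are made up):

```shell
# Simulated pgrep -af output: first field is the PID, rest is the command line.
sample='1352676 sudo pgrep -af /var/jenkins/workspace/spdk
1352700 /var/jenkins/workspace/spdk/build/bin/nvmf_tgt'
# Drop the pgrep invocation itself, then keep only the PID column.
pids=$(printf '%s\n' "$sample" | grep -v 'sudo pgrep' | awk '{print $1}')
echo "$pids"
```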